| url (string, lengths 13–4.35k) | tag (string, 1 class) | text (string, lengths 109–628k) | file_path (string, lengths 109–155) | dump (string, 96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
http://irememberthismovie.com/astronautssci-fi-movie-wplanet-that-has-different-surrface-time-to-that-of-its-orbit/
|
code
|
From what I remember it involved 3-4 astronauts. One stays in orbit (a black man), while the others go to the surface of this planet.
The ones on the surface are there to retrieve data (from probes perhaps?). The astronauts on the surface only have a limited time to collect this data and get back to the ship in orbit. I think something happens to shorten their time there (a storm maybe?). During this scene they were surrounded by water if I remember correctly.
The female (possibly the crew leader?) keeps saying she can finish the data collection in time because it’ll be ages before they can get another chance. She says this despite the warnings from her comrades to just leave. Well, they miss their chance to get back to the ship in orbit.
Now the kicker is that the time on the planet’s surface is vastly different to the time in orbit. For instance, on the surface perhaps only a few hours pass, but in orbit several years pass. I think it ends up being something like 20+ years to the man in orbit before the surface crew gets back.
Anyone know what this movie is? I watched it in the last 3 years, but I streamed it so the movie itself could be older, although I don’t think it would be more than, say, ten years old. It’s in colour, an American film in English.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591140.45/warc/CC-MAIN-20180719144851-20180719164851-00013.warc.gz
|
CC-MAIN-2018-30
| 1,281
| 5
|
https://jobmote.com/job/108765/full-stack-engineer-100-remote/
|
code
|
Minimum Required Skills:
If you are a Full Stack Engineer with experience, please read on!
Top Reasons to Work with Us
- HUGE Room for Growth
- Great Work/Life Balance/Autonomy
- Competitive Pay
What You Will Be Doing
As a Full Stack Developer, you'll be working with us in all aspects of the product, from its core infrastructure to its front-end. As a part of the development team, you'll wear multiple hats, turn ambiguity into details, take the lead on building complex features and continuously find opportunities to improve performance and increase reliability.
What You Need for this Position
At Least 3 Years of experience and knowledge of:
Nice to have:
- Devops Experience
What's In It for You
- Relocation/Remote Option!
- 401k
So, if you are a Full Stack Engineer with experience, please apply today!
Applicants must be authorized to work in the U.S.
Security Clearance will be needed - therefore, only US citizens can be considered. Those authorized to work in the United States without sponsorship are encouraged to apply. Please apply directly by clicking 'Click Here to Apply' with your Word resume!
Looking forward to receiving your resume and going over the position in more detail with you.
- Not a fit for this position? Click the link at the bottom of this email to search all of our open positions.
Looking forward to receiving your resume!
CyberCoders, Inc is proud to be an Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work - In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
Copyright 1999 - 2019 . CyberCoders, Inc. All rights reserved.
- provided by Dice
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00302.warc.gz
|
CC-MAIN-2019-51
| 1,947
| 25
|
https://www.experts-exchange.com/questions/27634095/CAS-Array-Cannot-open-your-default-e-mail-folders.html
|
code
|
I'm setting up an Exchange 2010 infrastructure, and am in the lucky situation of having four Exchange servers. My company has a single domain and one site. Two of the servers are running the Client Access Server and Hub Transport roles, and the other two servers are running the Mailbox roles and a DAG. Originally all three roles were running on the two latter servers, a CAS Array was created and I was going to use Hardware Load Balancers, but instead I decided to buy two new servers and transferred the CAS and Hub Transport roles to these, and am using Windows Load Balancing.
I have some test 2010 mailboxes and get the message "Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your offline folder file." when setting up an Outlook profile.
Microsoft Knowledge Base article 982678 does not apply because all the existing mailbox databases have been added to the CAS Array.
Can anyone help?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00614.warc.gz
|
CC-MAIN-2023-14
| 997
| 4
|
https://klkuttler.com/books/LinearAlgebraAndAnalysis/x1-8900011.1.2
|
code
|
11.1.2 Cauchy Sequences, Completeness
Of course it does not go the other way. For example, you could let xn = (−1)^n and it has a convergent subsequence but fails to converge. Here d(x,y) = |x − y| and the metric space is just ℝ.
However, there is a kind of sequence for which it does go the other way. This is called a Cauchy sequence: {xn} is called a Cauchy sequence if for every ε > 0 there exists N such that if m,n ≥ N, then d(xm,xn) < ε.
Now the major theorem about this is the following.
Theorem 11.1.16 Let {xn} be a Cauchy sequence. Then it converges if and only if some subsequence converges.
Proof: =⇒ This was just done above.
⇐= Suppose now that {xn} is a Cauchy sequence and limk→∞ xnk = x. Let ε > 0 be given. Then there exists N1 such that if k > N1, then d(xnk,x) < ε/2. From the definition of what it means to be Cauchy, there exists N2 such that if m,n ≥ N2, then d(xm,xn) < ε/2. Let N ≥ max(N1,N2). Then if k ≥ N, then nk ≥ N and so
d(xk,x) ≤ d(xk,xnk) + d(xnk,x) < ε/2 + ε/2 = ε.
It follows from the definition that limk→∞ xk = x. ■
Definition 11.1.17 A metric space is said to be complete if every Cauchy sequence converges.
Another nice thing to note is this.
Proposition 11.1.18 If {xn} is a sequence and if p is a limit point of the set S = {xn : n ∈ ℕ}, then there is a subsequence {xnk} which converges to p.
Proof: By Theorem 11.1.7, there exists a sequence of distinct points of S, denoted as {yk}, none of them equal to p, which converges to p. Thus B(p,r) contains infinitely many different points of the set S, this for every r > 0. Let xn1 ∈ B(p,1) where n1 is the first index such that xn1 ∈ B(p,1). Suppose xn1,…,xnk have been chosen, the ni increasing, and let 1 > δ1 > δ2 > ⋯ > δk where xni ∈ B(p,δi). Choose δk+1 < min{δk/2, d(p,xj) : j ≤ nk and xj ≠ p}. Since B(p,δk+1) contains infinitely many points of S, we may let xnk+1 ∈ B(p,δk+1) where nk+1 is the first index such that xnk+1 is contained in B(p,δk+1); by the choice of δk+1, nk+1 > nk. Since δk → 0, it follows that limk→∞ xnk = p. ■
Another useful result is the following.
Lemma 11.1.19 Suppose xn → x and yn → y. Then d(xn,yn) → d(x,y).
Proof: Consider the following.
|d(xn,yn) − d(x,y)| ≤ |d(xn,yn) − d(x,yn)| + |d(x,yn) − d(x,y)| ≤ d(xn,x) + d(yn,y),
and the right side converges to 0 as n →∞. ■
First are some simple lemmas featuring one dimensional considerations. In these, the metric space is ℝ and the distance is given by d(x,y) = |x − y|.
First recall the nested interval lemma. You should have seen something like it in calculus, but this is often not the case because there is much more interest in trivialities like integration.
Lemma 11.1.20 Let Ik = [ak,bk] and suppose that Ik ⊇ Ik+1 for all k. Then there exists a point p in ∩k=1∞ Ik.
Proof: We note that for any k,l, ak ≤ bl. Here is why. If k ≤ l, then ak ≤ al ≤ bl. If k > l, then ak ≤ bk ≤ bl. It follows that for each l, supk ak ≤ bl. Hence supk ak is a lower bound to the set of all bl and so it is no larger than the greatest lower bound. It follows that supk ak ≤ infl bl. Pick x ∈ [supk ak, infl bl]. Then for every k, ak ≤ x ≤ bk. Hence x ∈ ∩k=1∞ Ik. ■
Lemma 11.1.21 The closed interval [a,b] is compact. This means that if there is a collection of open intervals of the form (c,d) whose union includes all of [a,b], then in fact [a,b] is contained in the union of finitely many of these open intervals.
Proof: Let C be a set of open intervals the union of which includes all of [a,b], and suppose [a,b] fails to admit a finite subcover. That is, no finite subset of C has union which contains [a,b]. Split [a,b] into two equal closed intervals. Then this must be the case for one of the two intervals; let I1 be the one for which this is so. Then split it into two equal pieces like what was just done and let I2 be a half for which there is no finite subcover of sets of C. Continue this way. This yields a nested sequence of closed intervals I1 ⊇ I2 ⊇ ⋯, the length of In being (b − a)/2^n, and by the above lemma, there exists a point x in all of these intervals. There exists U ∈ C, say U = (c,d), such that x ∈ U. However, for all n large enough, the length of In is less than min(x − c, d − x), and since x ∈ In, the interval In is actually contained in U, contrary to the construction. Hence [a,b] is compact after all. ■
As a useful corollary, this shows that ℝ is complete.
Corollary 11.1.22 The real line ℝ is complete.
Proof: Suppose {xk} is a Cauchy sequence in ℝ. Then there exists M such that xk ∈ [−M,M] for all k, since a Cauchy sequence is bounded. The sequence has a convergent subsequence. Why? If there is no convergent subsequence, then for each x ∈ [−M,M] there is an open set Ux containing x such that xk ∈ Ux for only finitely many values of k. Since [−M,M] is compact, there are finitely many of these open sets whose union includes [−M,M]. This is a contradiction because xk ∈ [−M,M] for all k ∈ ℕ, so at least one of the open sets must contain xk for infinitely many k. Hence there is a convergent subsequence. By Theorem 11.1.16, the original Cauchy sequence converges to some x ∈ ℝ. ■
Example 11.1.23 Let n ∈ ℕ. ℂn with distance given by d(x,y) = max{|xj − yj| : j = 1,…,n} is a complete space. Recall that |a + ib| = (a² + b²)^(1/2). Then ℂn is complete. Similarly ℝn is complete.
To see that this is complete, let {xk} be a Cauchy sequence. Observe that for each j, |xjk − xjl| ≤ d(xk,xl). That is, each component is a Cauchy sequence in ℂ. Writing xjk = ajk + ibjk, it follows that {ajk}k is a Cauchy sequence. Similarly {bjk}k is a Cauchy sequence. It follows from completeness of ℝ, shown above, that these converge. Thus there exist aj,bj such that ajk → aj and bjk → bj. Letting x be the vector whose j-th component is aj + ibj, each component of xk converges to the corresponding component of x, and so xk → x, showing that ℂn is complete. The same argument shows that ℝn is complete. It is easier because you don’t need to fuss with real and imaginary parts.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578655155.88/warc/CC-MAIN-20190424174425-20190424200425-00206.warc.gz
|
CC-MAIN-2019-18
| 4,411
| 113
|
https://semprag.org/index.php/sp/article/view/sp.14.4
|
code
|
Main Article Content
Two mouse-tracking experiments tested predictions from two different models of scalar implicature as to whether exhaustive interpretations are computed prior to ignorance implicatures. We use different German intonational patterns to probe the availability of these interpretations (Experiments 1 and 2) and add a speaker competence manipulation in Experiment 2. Results from Experiment 1 found that deriving exhaustive interpretations with an L+H* contour was delayed relative to deriving ignorance implicatures with an L*+H contour. Experiment 2 replicated this finding even with a strengthened competence assumption about the speaker. We interpret our processing data as providing constraints on the computational mechanisms underlying the interpretation of scalar implicatures.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817073.16/warc/CC-MAIN-20240416062523-20240416092523-00538.warc.gz
|
CC-MAIN-2024-18
| 776
| 2
|
https://www.mapleprimes.com/questions/203071-Can-Maple-Give-Answer-For-Intcosxnx-
|
code
|
I was trying to see if I can get the reduction formulas for int(cos(x)^n,x) in Maple. But it seems no assumption I used can make Maple give any result for this. Mathematica gives a result using Hypergeometric2F1 (even with no assumption on n, which I am not sure about now), but I was wondering why Maple can't do this one:
int( (cos(x))^n,x) assuming n::integer;
int( (cos(x))^n,x) assuming n::posint;
In Mathematica, I get:
I am a newbie in Maple, so maybe I am missing some command or doing something wrong.
ps. I was trying to obtain
But that is a lost cause for now. I just need to find out first why int(cos(x)^n,x) does not evaluate to anything in Maple.
fyi, the Hypergeometric result for $\int cos^n(x) \,dx$ can be seen in this reference (half way down the page):
ps. Can't one enter LaTeX in this forum like at Stack Exchange?
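For what it's worth, a quick way to sanity-check the standard reduction formula int(cos(x)^n, x) = cos(x)^(n-1)*sin(x)/n + (n-1)/n * int(cos(x)^(n-2), x) outside of Maple is a small Python/SymPy sketch (SymPy syntax, not Maple; it only checks a few concrete integer values of n by differentiating the right-hand side):

```python
# Sanity check of the reduction formula for a few concrete integer exponents (SymPy, not Maple).
from sympy import symbols, cos, sin, integrate, diff, simplify, Rational

x = symbols('x')

for n in range(2, 7):
    lhs = cos(x)**n
    # Right-hand side of the reduction formula; the remaining integral is
    # evaluated explicitly by SymPy for the concrete exponent n - 2.
    rhs = cos(x)**(n - 1) * sin(x) / n + Rational(n - 1, n) * integrate(cos(x)**(n - 2), x)
    # Differentiating the proposed antiderivative should give back cos(x)**n.
    assert simplify(diff(rhs, x) - lhs) == 0
    print(f"n = {n}: reduction formula verified")
```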
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00491.warc.gz
|
CC-MAIN-2023-06
| 823
| 9
|
https://www.crookedstaff.co.uk/2021/03/
|
code
|
It's been a bit of a rough month or so, and I wasn't sure if I'd be able to get anything released in March - as right at the end of February this happened:
But, as you can tell, I bounced back (eventually) ...and I couldn't wait to get back to the crafting table and get to work on these lava tiles:
And as you can see, this whole concept could be used to make pits, flooded areas, and so on - just by using a different texture in place of the lava (note that I plan on making a 'tips' video on this).
Anyway, as usual, here is the video that shows how I put it all together...
...and the (pay-what-you-want) pdf file can be found HERE.
So, here's hoping you can put it to good use!!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816954.20/warc/CC-MAIN-20240415080257-20240415110257-00043.warc.gz
|
CC-MAIN-2024-18
| 684
| 6
|
https://git.stg.centos.org/source-git/libsass/blob/7770af128218a0c094215bcb589909b9bf339804/f/docs/setup-environment.md
|
code
|
In order to install and setup your local development environment, there are some prerequisites:
OS X: First you'll need to install XCode which you can now get from the AppStore installed on your mac. After you download that and run it, then run this on the command line:
First, clone the project and then add a line to your
~/.bash_profile that will let other programs know where the LibSass dev files are.
git clone git@github.com:sass/libsass.git
cd libsass
echo "export SASS_LIBSASS_PATH=$(pwd)" >> ~/.bash_profile
Then, if you run the "bootstrap" script, it should clone all the other required projects.
You should now have a
sassc folder within the libsass folder. Both of these are clones of their respective git projects. If you want to do a pull request, remember to work in those folders. For instance, if you want to add a test (see other documentation for how to do that), make sure to commit it to your fork of the sass-spec github project. Also, whenever you are running tests, make sure to
pull from the origin! We want to make sure we are testing against the newest libsass, sassc, and sass-spec!
Now, try and see if you can build the project. We do that with the make command.
At this point, if you get an error, something is most likely wrong with your compiler installation. Yikes. It's hard to cover how to fix this in an article. Feel free to open an issue and we'll try and help! But, remember, before you do that, googling the error message is your friend! Many problems are solved quickly that way.
Then, to run the spec against LibSass, just run:
If you get an error about
SASS_LIBSASS_PATH, you may still need to set a variable pointing to the libsass folder, like this:
...where the latter part is to the
libsass directory you've cloned. You can get this path by typing
pwd in the Terminal
Go into the sass-spec folder that should have been cloned earlier with the "bootstrap" command. Run the following.
bundle install
./sass-spec.rb
Voila! Now you are testing against Sass too!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646181.29/warc/CC-MAIN-20230530230622-20230531020622-00618.warc.gz
|
CC-MAIN-2023-23
| 2,007
| 20
|
https://gicseh.com/microsoft-windows-server-2019.php
|
code
|
Windows Server 2019 is the latest version of the server operating system by Microsoft, part of the Windows NT family of operating systems. Basically, the difference between Windows and Windows Server is that Windows is a client operating system whereas Windows Server is a network-based server operating system.
A client operating system is used by individual users to accomplish their own tasks, which can be anything like programming, gaming, video editing, or writing scripts. Windows Server, on the other hand, provides services to users. It uses a network or domain through which we can deploy our own servers and use them from anywhere. If we install Windows Server on a desktop, that desktop can act as a server, and from there we can host our own website. We can host servers on a Windows client OS as well, but a client OS supports less bandwidth than the server edition, so it will be a bit slower. As for licensing, Windows Server supports roughly ten times more connections than a Windows client. So for server-related work, Windows Server is clearly the more important option.
There were many previous versions of Windows Server, such as Server 2003, 2008, 2012, and 2016, but the latest release from Microsoft is Server 2019, with lots of new features and updates.
Windows Server 2019 is the operating system that bridges on-premises environments with Azure, adding additional layers of security while helping you modernise your applications and infrastructure.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099892.46/warc/CC-MAIN-20231128151412-20231128181412-00462.warc.gz
|
CC-MAIN-2023-50
| 1,642
| 4
|
https://stackoverflow.com/questions/13207450/permissionerror-errno-13-in-python
|
code
|
Just starting to learn some Python and I'm having an issue as stated below:
a_file = open('E:\Python Win7-64-AMD 3.3\Test', encoding='utf-8')
Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    a_file = open('E:\Python Win7-64-AMD 3.3\Test', encoding='utf-8')
PermissionError: [Errno 13] Permission denied: 'E:\\Python Win7-64-AMD 3.3\\Test'
Seems to be a file permission error, if any one can shine some light it would be greatly appreciated.
NOTE: not sure how Python and Windows files work but I'm logged in to Windows as Admin and the folder has admin permissions.
I have tried changing
.exe properties to run as Admin.
Is Test a file or a folder?
Also be careful with \b or any other letter that can be part of an escape sequence...
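As a general note (a minimal sketch, not an exact reproduction of the asker's setup; the path is just the one from the question), Errno 13 on Windows commonly means the path points to a directory rather than a file, and backslashes in a non-raw string can also silently form escape sequences. Something like this separates the cases:

```python
import os

# Raw string (r'...') so backslashes are not treated as escape sequences.
path = r'E:\Python Win7-64-AMD 3.3\Test'

if os.path.isdir(path):
    # open() on a directory raises PermissionError (Errno 13) on Windows.
    print("That path is a directory; open one of the files inside it instead:")
    print(os.listdir(path))
elif os.path.isfile(path):
    with open(path, encoding='utf-8') as a_file:
        print(a_file.read())
else:
    print("That path does not exist.")
```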
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00423.warc.gz
|
CC-MAIN-2023-14
| 733
| 8
|
https://exploit-notes.hdks.org/exploit/linux/privilege-escalation/update-motd-privilege-escalation/
|
code
|
Update-Motd Privilege Escalation
Last modified: 2023-02-17
/etc/update-motd.d/ is used to generate the dynamic message of the day (MOTD) that is displayed to users when they log in to the system. If we can modify files listed in the directory, we can inject malicious script to escalate privileges.
ls -al /etc/update-motd.d/
If we have permission to modify files in this directory, we can inject arbitrary code that executes when a user logs in.
Run the following command to copy the bash binary and give suid to the copy. Replace <username> with your current user name.
echo "cp /bin/bash /home/<username>/bash && chmod u+s /home/<username>/bash" >> /etc/update-motd.d/00-header
After that, log out and log in again with SSH. The above script should be executed.
Now execute the copied binary with the -p flag (./bash -p) from the home directory.
We should get a root shell.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510412.43/warc/CC-MAIN-20230928130936-20230928160936-00118.warc.gz
|
CC-MAIN-2023-40
| 812
| 12
|
https://mx.coursera.org/courses?query=neural%20networks&page=3&index=prod_all_launched_products_term_optimization
|
code
|
Skills you'll gain: Artificial Neural Networks, Computer Programming, Machine Learning, Statistical Programming, Computer Vision, Data Science, Deep Learning, Marketing, Mobile Development, Tensorflow
Intermediate · Guided Project · Less Than 2 Hours
Skills you'll gain: Machine Learning, Deep Learning, Tensorflow, Artificial Neural Networks, Computer Vision, Data Science, Computer Programming, Python Programming, Statistical Programming, General Statistics, Natural Language Processing, Probability & Statistics, Applied Machine Learning, Business Psychology, Entrepreneurship, Forecasting, Communication, Machine Learning Algorithms, Marketing, Calculus, Data Visualization, Mathematics, Programming Principles, Statistical Machine Learning, Computer Graphic Techniques, Computer Graphics, Machine Learning Software
Intermediate · Professional Certificate · 3-6 Months
Skills you'll gain: Machine Learning, Computer Programming, Python Programming, Computer Vision, Deep Learning, Statistical Programming, Artificial Neural Networks, Machine Learning Algorithms, Probability & Statistics, General Statistics, Regression, Applied Machine Learning, Apache, Data Management, Data Mining, Data Analysis, Statistical Analysis, Big Data, Algorithms, Theoretical Computer Science, Statistical Machine Learning, Computer Graphics, Dimensionality Reduction, Tensorflow, Computer Graphic Techniques, Basic Descriptive Statistics, Business Analysis, Correlation And Dependence, Databases, Mathematics, NoSQL, SQL, Econometrics, Estimation, Entrepreneurship, Machine Learning Software, Probability Distribution, Data Science, Data Structures, IBM Cloud, Supply Chain Systems, Supply Chain and Logistics
Intermediate · Professional Certificate · 3-6 Months
Skills you'll gain: Probability & Statistics, Machine Learning, Bayesian Network, General Statistics, Markov Model, Bayesian Statistics, Probability Distribution, Computer Architecture, Distributed Computing Architecture, Leadership and Management, Other Programming Languages, Computer Programming, Machine Learning Algorithms, Statistical Machine Learning, Applied Machine Learning, Correlation And Dependence, Behavioral Economics, Business Psychology, Data Analysis, Graph Theory, Mathematics, Algebra, Geovisualization
Advanced · Specialization · 3-6 Months
Skills you'll gain: Data Science, Data Structures, SQL, Computer Programming Tools, Data Analysis Software, Machine Learning Software, Software Visualization, Statistical Programming, Databases, Python Programming, Database Theory, Data Visualization Software, R Programming, Data Management, Data Mining, Regression, Devops Tools, Machine Learning Algorithms, SPSS, Basic Descriptive Statistics, Data Analysis, Database Application, Big Data, Computer Programming, Database Administration, Deep Learning, General Statistics, Machine Learning, Marketing, Probability & Statistics, Storytelling, Writing
Beginner · Specialization · 3-6 Months
Skills you'll gain: Data Science, Cloud Computing, Applied Machine Learning, Cloud Engineering, Cloud Infrastructure, Data Mining, Regression, Cloud Applications, Cloud Management, Cloud Platforms, Cloud Storage, DevOps, IBM Cloud, Network Security, Software As A Service, Software Engineering, Basic Descriptive Statistics, Data Analysis, Big Data, BlockChain, Computer Architecture, Computer Graphics, Computer Programming, Computer Vision, Deep Learning, Finance, General Statistics, Human Computer Interaction, Interactive Design, Machine Learning, Machine Learning Algorithms, Operating Systems, Probability & Statistics, Security Engineering, Software Architecture, Software Framework, Storytelling, System Programming, Theoretical Computer Science, Writing
Beginner · Specialization · 1-3 Months
Neural networks, also known as neural nets or artificial neural networks (ANN), are machine learning algorithms organized in networks that mimic the functioning of neurons in the human brain. Using this biological neuron model, these systems are capable of unsupervised learning from massive datasets.
This is an important enabler for artificial intelligence (AI) applications, which are used across a growing range of tasks including image recognition, natural language processing (NLP), and medical diagnosis. The related field of deep learning also relies on neural networks, typically using a convolutional neural network (CNN) architecture that connects multiple layers of neural networks in order to enable more sophisticated applications.
For example, using deep learning, a facial recognition system can be created without specifying features such as eye and hair color; instead, the program can simply be fed thousands of images of faces and it will learn what to look for to identify different individuals over time, in much the same way that humans learn. Regardless of the end-use application, neural networks are typically created in TensorFlow and/or with Python programming skills.
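To make that concrete, here is a minimal, illustrative TensorFlow/Keras sketch of a small feed-forward network for classifying 28x28 grayscale digit images (the layer sizes, dataset, and training settings are arbitrary choices, not taken from any particular course):

```python
import tensorflow as tf

# Small feed-forward network: flatten the image, one hidden layer, softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train briefly on the MNIST digits bundled with Keras, then evaluate.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test / 255.0, y_test))
```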
Neural networks are a fundamental concept to understand for jobs in artificial intelligence (AI) and deep learning. And, as the number of industries seeking to leverage these approaches continues to grow, so do career opportunities for professionals with expertise in neural networks. For instance, these skills could lead to jobs in healthcare creating tools to automate X-ray scans or assist in drug discovery, or a job in the automotive industry developing autonomous vehicles.
Professionals dedicating their careers to cutting-edge work in neural networks typically pursue a master’s degree or even a doctorate in computer science. This high-level expertise in neural networks and artificial intelligence is in high demand; according to the Bureau of Labor Statistics, computer research scientists earn a median annual salary of $122,840, and these jobs are projected to grow much faster than average over the next decade.
Absolutely - in fact, Coursera is one of the best places to learn about neural networks, online or otherwise. You can take courses and Specializations spanning multiple courses in topics like neural networks, artificial intelligence, and deep learning from pioneers in the field - including deeplearning.ai and Stanford University. Coursera has also partnered with industry leaders such as IBM, Google Cloud, and Amazon Web Services to offer courses that can lead to professional certificates in applied AI and other areas. You can even learn about neural networks with hands-on Guided Projects, a way to learn on Coursera by completing step-by-step tutorials led by experienced instructors.
Before starting to learn neural networks, it's important to have experience creating and using algorithms since neural networks run on complicated algorithms. You should also have fundamental math skills at least, but you'll be at a better advantage if you have knowledge of linear algebra, calculus, statistics, and probability. Being proficient at problem-solving is also important before starting to learn neural networks. An understanding of how the human brain processes information is helpful since artificial neural networks are patterned after how the brain works. You'll also benefit from having experience using any programming language, in particular Java, R, Python, or C++. This includes experience using these languages' libraries, which you'll access to apply the algorithms used in neural networks.
People who are best suited for roles in neural networks are innovative, interested in technology, and have the ability to identify patterns in large amounts of data and draw conclusions from them. People who have a desire to make life and work easier for human beings through artificial technology are well suited for roles in neural networks too. Also, people who have good programming skills and data engineering skills like SQL, data analysis, ETL, and data visualization are likely well suited for roles in neural networks.
If you are interested in the field of artificial intelligence, learning about neural networks is right for you. If your current or future position involves data analysis, pattern recognition, optimization, forecasting, or decision-making, you might also benefit from learning neural networks. Neural networks are also used in image recognition software, speech synthesis, self-driving vehicles, navigation systems, industrial robots, and algorithms for protecting information systems, so if you're interested in these technologies, learning neural networks may be helpful to you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00739.warc.gz
|
CC-MAIN-2023-14
| 8,567
| 21
|
http://sourceforge.net/directory/development/build/developmentstatus:planning/license:mpl/
|
code
|
Generate Pascal and/or C code starting from a simple HTML-like file. You then insert the output in your program, and with a simple call to a function you'll see the linked HTML on the screen! In the future the format will be HTML/XML. (3 weekly downloads)
An OIDE (Online Integrated Development Environment). Lite, easy to use, fast to install. Easily extensible API with support for CVS and SVN. (0 weekly downloads)
Snippit Pad is an enhanced notepad with syntax highlighting and the ability to insert code snippits from an attached database. (0 weekly downloads)
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138017.47/warc/CC-MAIN-20140914011218-00271-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 576
| 4
|
https://blogs.office.com/en-us/2005/12/29/pivottables-9-great-support-for-sql-server-analysis-services/
|
code
|
Today, I'll start a series of articles on the improvements we've made to PivotTables connected to OLAP (OnLine Analytical Processing) data sources, specifically Microsoft SQL Server Analysis Services models (in addition to its relational database product, SQL Server includes a feature named Analysis Services which provides business intelligence and data mining capabilities). Excel has worked with SQL Server Analysis Services for several versions now, but we have put a lot of time and effort into Excel 12 in order to make it a great front end to SQL Server Analysis Services, especially Microsoft SQL Server 2005 Analysis Services (Microsoft SQL Server 2005 Analysis Services was recently released as part of Microsoft SQL Server 2005 and introduced many new, powerful features for analyzing data … for more information on Analysis Services, please take a look here and here).
Before I launch into discussing how Excel 12 works with SQL Server Analysis Services, I wanted to summarize what I see as several key benefits to using Analysis Services as a tool for working with business data.
- Friendliness. Business data is typically stored in relational databases optimized for data input or storage and not analysis of that data. Names of columns etc. are typically not intuitive to end users, there are no clear relationships between fields, etc. Analysis Services provides a user-friendly model where you can provide understandable business names, specify relationships between fields (Product Category → Product Subcategory → Product) so that it is possible for business users to design their own reports without help from IT.
- Personalization. Analysis Services offers tools for personalizing individual users' reporting experience by only showing them the data that they care about and have permissions to see; in addition, Analysis Services can translate data into users' preferred languages.
- Analytical capabilities. Key Performance Indicators, calculations, conditional formatting, and actions are just a few examples of business logic that you can define once in Analysis Services and then expose automatically in Excel PivotTables. Part of the beauty of this is that all users see the same thing in their PivotTables because the formatting, for example, is calculated in one place: on the server.
- Fast analysis. Analysis Services aggregates data so that analytical queries that might take minutes when executed against a relational database are typically executed in less than a second with Analysis Services.
- One consolidated analytical model. Analysis Services allows you to consolidate data from different business systems into a single analytical model. For example, you might have some sales data in an Oracle database and some customer data in a SQL Server database but for analysis that you would like to see in the same report. With an Analysis Services model, you can do just that without needing to change the source system at all.
- One version of the truth. When analyzing data in Analysis Services, all the business logic is centrally managed in one analytical model so that every user will see the same numbers, calculated using the same business logic. Any changes made to the model will immediately be available to all Excel PivotTable users when they update their report. No more worrying that different users with different copies of the spreadsheet have different financial results.
All that said, let's return to Excel 12, and take a look at what the PivotTable Field List looks like when connected to an Analysis Services 2005 model.
When connected to Analysis Services, a PivotTable exposes three types of fields: "measures", or the numbers (like "sales" and "profit") that appear on your PivotTables, as well as "KPIs" and "dimensions" (both discussed below). Measures can be grouped together in Analysis Services (by the person that designs the model) into something called "measure groups". In the Excel 12 field list, each measure group has a "sigma" icon to communicate to the user that the fields in the group are numerical and that they belong in the Values area of the PivotTable. Measure groups essentially represent different sets of business metrics available for analysis; typically a measure group contains related measures from the same business application. In the image below, the Exchange Rates measure group folder is open and there are two measures listed which can be added to the PivotTable: Average Rate and End of Day Rate.
Key Performance Indicators (KPIs)
Below the measure group folders is a KPI folder (assuming KPIs have been defined in an Analysis Services model). This folder contains Key Performance Indicators defined on the Analysis Services server. (Key Performance Indicators are a big subject unto themselves; for the sake of this article, suffice it to say that they track key business metrics and that they are defined in Analysis Services). The different components of a KPI (Value, Goal, Status and Trend) can be added to the Values area of the PivotTable so you can track the latest values of your key business metrics. Here is a screenshot of the KPIs folder … in the image, the Product Gross Margins KPI is open and all you have to do to add the Value, Goal, Status or Trend of the KPI to the PivotTable is to check the checkbox next to it.
KPIs in PivotTables are quite interesting – I'll cover PivotTable KPI support in more detail in an upcoming post.
Finally, the dimensions of the Analysis Services model are listed in the PivotTable field list. (Dimensions are the different attributes that you can use to slice and dice your data, like time, geography, customer, product, etc.) In the screenshot below, the Customer dimension folder is open and you can see the customer-related fields available in the Analysis Services model.
Organizing the field list
Within the measure group folders, the KPIs folder and the dimension folders, the person that authors the Analysis Services model can set up subfolders to organize the data in an intuitive way, making it much easier for business users to navigate the field list. In the screen shot above, an example would be the Contacts and Location folders. These folders are defined on the Analysis Services; Excel picks them up when initializing the PivotTable Field List.
For those of you that are familiar with SQL Server 2005 Analysis Services, the field list will show both user hierarchies (like Customer Geography in the example) and attribute hierarchies (like Email Address in the example). If you do not specify any folder for an attribute hierarchy on the server, we will display it in a special "More Fields" folder under the dimension where it belongs. We do this since there are typically many attribute hierarchies (often one per column of each table in the source database), and listing them at the top level makes it hard to navigate the field list.
Focusing the information in the field list
When a PivotTable is connected to SQL Server 2005 Analysis Services, at the top of the PivotTable Field List there is a drop down where you can select which measure group you want to work with. In many cases, you only need the measures from one measure group for a report, and this drop down allows you to filter out all the other measure groups as well as KPIs and dimensions that are not related to the measure group you select. This can have the effect of reducing the number of fields visible in the field list, making it much easier to build your analysis.
To illustrate this with an example, I'll pick the Financial Reporting measure group.
And here is the resultant field list, filtered to only show information related to Financial Reporting. Now there is only one measure group folder visible and significantly fewer dimensions, it is much simpler for me to find the fields I need.
Perspectives in PivotTables
One feature available in SQL Server 2005 Analysis Services is the idea of a "perspective". To crib from the Analysis Services website, a large Analysis Services model can present to the user a large number of dimensions, measure groups, measures, and KPIs and may be challenging to navigate, even with the ability to filter the field list based on a measure group discussed previously. A perspective, which is defined in the Analysis Services model, creates a subset "view" of a cube; essentially, model designers can create perspectives that only contain the information needed for a given purpose.
Excel 12 supports perspectives; once a user has connected to a perspective (which to Excel 12 looks just like any other data source), the PivotTable Field List will only show the measure groups included in the perspective inside the "Show fields related to:" drop down, and selecting (All) in the drop-down will only show the user the fields included in the perspective.
Hierarchies make exploration easy
The last Analysis Services feature I will cover today is hierarchies. One of the advantages of PivotTables based on Analysis Services models is that you can set up hierarchies within each dimension. Hierarchies help users navigate the data intuitively and correctly. To users, a hierarchy defines relations between fields … let's look at an example. In the screenshot below, I've expanded the Customer Geography hierarchy to show the individual fields (or levels) it contains.
In this example there are five levels, so when I add Customer Geography to the PivotTable by clicking the checkbox for it, I'm actually adding five fields at once (for non-Analysis Services data sources, you have to add multiple fields in the right order to get the same report, and it might not always be obvious which fields to pick). This gives me the opportunity to expand countries to see states etc. without having to also add the four other fields to the PivotTable. After I've added Customer Geography to the PivotTable, I can explore the hierarchy by clicking the expand indicator ("+") for Australia in the PivotTable, which shows me the next level of detail ("State-Province").
The new Excel 12 expand/collapse indicators (discussed in a previous post) appear automatically for hierarchies to make it very easy to determine when there are details to expand or collapse. For example, I could use the expand indicators to further expand to see "City", "Postal Code", etc.
PS Updated 4/06 to correct a few minor points
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106996.2/warc/CC-MAIN-20170820223702-20170821003702-00649.warc.gz
|
CC-MAIN-2017-34
| 10,443
| 29
|
https://www.rollbar.com/knowledge-base/manage-rollbar-automatically-through-the-rollbar-terraform-provider/
|
code
|
April 8th, 2021 • By Gabriella Papp
Rollbar account administration is critical to getting the most out of Rollbar and to maintaining data visibility across teams. However, this process can be tedious for large and fast-growing accounts: users are required to manually provision and manage Rollbar accounts (using the UI or the APIs). Fortunately, the Rollbar Terraform Provider offers an automated way!
Terraform is a multi-cloud provisioning product used to create, manage, and update infrastructure resources. The Provider will automate the creation, modification, and removal of resources within your account such as projects, users, and teams.
The Terraform Provider is a declarative framework - which means that you can describe the end state that you want to achieve without stating the exact steps and ‘how’ to get there. It leverages the Rollbar API to make the changes necessary to reach and maintain its desired state. This way you can reduce the time it takes to provision and manage your Rollbar account, while cutting back on manual efforts and human error.
A Terraform integration, known as a Provider, provides a way to provision and manage a Rollbar account. Instead of using the Ingestion API, it uses the parts of the API that create, edit, and destroy Rollbar accounts, projects, teams, access tokens, etc.
With the Rollbar Terraform Provider you will be able to:
Manage projects and users with ease
Meet security requirements easily
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991537.32/warc/CC-MAIN-20210513045934-20210513075934-00515.warc.gz
|
CC-MAIN-2021-21
| 1,549
| 9
|
https://www.ninjatb.com/howtofindkeywordsforawebsite/how-to-find-words-in-a-book-2019-how-to-find-keywords-using-google-adwords-2019.html
|
code
|
If you already create niche websites, you can clearly see how everything I’ve discussed above will benefit you. In fact, you probably already understood all of it. But maybe you are a blogger who has never thought of creating a niche website. Is this you? Let me tell you – if you can create a blog, you can create a niche website. It is not hard at all to create a small site, laser focused on a very specific topic, that will earn at least enough to make Long Tail Pro pay for itself.
It’s very easy to use, fast and smooth and gives you tons of Long Tail Keywords. You only pay for the web based software once, so there are never any monthly fees. I’ve also been impressed over the years at how easy it is to update. These updates are free and the creator (Clever Gizmos) is always moving with the ever changing world of SEO and updated API’s to keep his program up to speed and fresh.
* Please note our tool currently assumes Google has ~83% of the market, with Bing + Yahoo! splitting the remaining 17%. Actual market conditions may vary significantly from that due to a variety of factors including search location, search market demographics, how much market share mobile search has relative to desktop in that particular vertical, etc.
It organizes your search volume data by separating the relevant keywords from those considered irrelevant, and presents the keywords it believes would help your pages rank very high in search engines. Once you indicate that you want your keywords filtered, the tool arranges that. That is very important for your AdSense and AdWords campaigns, and the program is also relevant for your CPC and other internet marketing strategies.
Use the Keyword Planner to flag any terms on your list that have way too little (or way too much) search volume, and don't help you maintain a healthy mix like we talked about above. But before you delete anything, check out their trend history and projections in Google Trends. You can see whether, say, some low-volume terms might actually be something you should invest in now -- and reap the benefits for later.
But maybe you didn’t watch the tutorial video yet. Or maybe you didn’t know about the extremely powerful KC feature. Maybe you didn’t know how quickly, easily and effectively you could analyze Google top 10 search results with Long Tail Pro. And maybe you didn’t even know how profitable it can be to simply target just the really long tail keywords.
Search Analytics can be found under the 'Search Traffic' section and provides details of the keywords that drove clicks to your website, based on data for up to the last 90 days. The difference with Google's version is that you can filter the data to put extra context around the keywords, such as filtering by country to see keyword popularity based on country, which can be useful when carrying out keyword research for websites that service more than one country.
WordStream's Negative Keyword Tool reduces wasteful PPC spending and improves ROI by preventing your AdWords PPC ads from showing on irrelevant searches. Enter a keyword to get a list of negative keyword suggestions. Then select the ones that aren't relevant to your campaigns and export the results for use in your AdWords account. As a result, your ads will be more relevant to searchers, grab a much more targeted audience and reduce your overall ad spend.
Of course, knowing where to start can be difficult. Below you'll find the five best SEO keyword research tools I recommend for startups to begin a well-rounded keyword foundation for your campaigns.
1. AdWords Keyword Planner - It's still the standard, although Google keeps making changes that just aren't helpful. I get that they want us to treat closely-related keywords in such a way that we're not creating multiple pages when we should just have one, but I'd appreciate it if they'd still break down the volume for each keyword that makes up a group (or at least list the keywords they're clumping together into a group).
Successful SEO requires multiple interrelated activities on all fronts: competition, keywords, link building, on-page and technical optimization. It creates a need for diverse tools, which is expensive. SEMrush solves this problem with an award-winning all-in-one toolkit that includes 17 tools covering all SEO fields. We’ll walk you through your SEO workflow, explaining how to get the most out of our toolkit.
But it’s still very useful for getting search volume data (provided your account still shows this), which is helpful when choosing which of the many keywords you’ve found to focus on (although you should take these estimates with a pinch of salt, they are still useful in indicating the relative search volumes of different keywords, even if the absolute estimates are a little off).
We prefer and suggest Long Tail Pro to all our clients. Long Tail Pro is the best long tail keyword research software online. I use their software for researching keywords for everything from social media marketing bios, to page and blog post titles, to headings (H2, H3, etc.), to meta descriptions, to YouTube descriptions, to content/articles, and so much more!
Although more and more keywords are getting encrypted by Google every day, another smart way to come up with keyword ideas is to figure out which keywords your website is already getting found for. To do this, you'll need website analytics software like Google Analytics or HubSpot's Sources tool. Drill down into your website's traffic sources, and sift through you organic search traffic bucket to identify the keywords people are using to arrive at your site.
From the review above there is no doubt about LongTailPro's benefits. You can grow your business quicker by making LongTailPro your partner. The price may be high, but you can try it by clicking the link below, and all you need to do then is purchase. The LongTailPro download option is not available anymore. Other LongTailPro alternatives on the market today claim to be cheaper, but they can't deliver. Drive traffic and earn more with the LongTailPro tool.
2) Software project no 2 – invite your readers to participate and select 2-5 that will enter a mastermind group with you – and let the readers follow the progress – from brainstorming to hiring a coder, to beta testing, to “how to reach out to get sales” (I know you wrote a post on this – but would be great to tag along). And then those that are not part of the Mastermind group could be added to a forum/FB group and can then follow along and develop and ask each other for help.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528013.81/warc/CC-MAIN-20190722113215-20190722135215-00064.warc.gz
|
CC-MAIN-2019-30
| 6,924
| 16
|
http://help.octgn.net/support/discussions/topics/4000297937
|
code
|
Problems running OCTGN. :-[
This is the message:
:( OCTGN.exe - Assert Failure
Expression: [mscorlib recursive resource lookup bug]
Description: Infinite recursion during resource lookup within mscorlib. This may be a bug in mscorlib, or potentially in certain extensibility points such as assembly resolve events or CultureInfo names.
Resource name: Word_At
Here's my log; any help would be appreciated.
Unfortunately the log doesn't say anything to help solve the problem. Are you getting the exact same error message, Beta-Blocker?
not exactly, here's the complete text
Please look through this http://help.octgn.net/categories/57761/forums/228797/topics/4000274701. If that doesn't help let me know.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867995.55/warc/CC-MAIN-20180527024953-20180527044953-00507.warc.gz
|
CC-MAIN-2018-22
| 715
| 10
|
http://testkingaws.com/oracle/1z0-047/what-would-be-the-outcome-of-the-above-statement-2/
|
code
|
View the Exhibit and examine the structure of the ORDERS and ORDER_ITEMS tables.
In the ORDERS table, ORDER_ID is the PRIMARY KEY and ORDER_DATE has the DEFAULT value as SYSDATE.
Evaluate the following statement:
WHERE order_id IN (SELECT order_id FROM order_items WHERE qty IS NULL);
What would be the outcome of the above statement?
The UPDATE statement would not work because the main query and the subquery use different tables.
The UPDATE statement would not work because the DEFAULT value can be used only in INSERT statements.
The UPDATE statement would change all ORDER_DATE values to SYSDATE provided the current ORDER_DATE is NOT NULL and QTY is NULL.
The UPDATE statement would change all the ORDER_DATE values to SYSDATE irrespective of what the current ORDER_DATE value is for all orders where QTY is NULL.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659063.33/warc/CC-MAIN-20190117184304-20190117210303-00067.warc.gz
|
CC-MAIN-2019-04
| 819
| 9
|
http://setgetweb.com/p/WAS855/ae/rins_hostname.html
|
code
|
Host name values
WebSphere Application Server requires a host name specification during installation, profile creation, and for some configuration activities. This article describes acceptable values for host name fields.
The host name is the network name for the physical machine on which the node is installed. The host name must resolve to a physical network node on the server. When multiple network cards exist in the server, the host name or IP address must resolve to one of the network cards. Remote nodes use the host name to connect to and to communicate with this node. The following guidelines can help in determining the appropriate host name for the machine:
- Select a host name that other machines can reach within the network.
- Do not use the generic identifier, localhost, for this value.
- Do not attempt to install WebSphere Application Server products on a machine with a host name that uses characters from the double-byte character set (DBCS). DBCS characters are not supported when used in the host name.
- Avoid using the underscore (_) character in machine names. Internet standards dictate that domain names conform to the host name requirements described in Internet Official Protocol Standards RFC 952 and RFC 1123. Domain names must contain only letters (upper or lower case) and digits. Domain names can also contain dash characters ( - ) as long as the dashes are not on the ends of the name. Underscore characters ( _ ) are not supported in the host name. If we have installed WAS on a machine with an underscore character in the machine name, access the machine with its IP address until you rename the machine.
If we define coexisting nodes on the same computer with unique IP addresses, define each IP address in a domain name server (DNS) look-up table. Configuration files for standalone application servers do not provide domain name resolution for multiple IP addresses on a machine with a single network address.
The value specified for the host name is used as the value of the hostName property in configuration documents for the standalone application server. Specify the host name value in one of the following formats:
- Fully qualified domain name servers (DNS) host name string, such as xmachine.manhattan.ibm.com
- The default short DNS host name string, such as xmachine
- Numeric IP address, such as 127.1.255.3
The fully qualified DNS host name has the advantage of being totally unambiguous and also flexible. You have the flexibility of changing the actual IP address for the host system without having to change the Application Server configuration. This value for host name is particularly useful if you plan to change the IP address frequently when using Dynamic Host Configuration Protocol (DHCP) to assign IP addresses. A format disadvantage is being dependent on DNS. If DNS is not available, then connectivity is compromised.
The short host name is also dynamically resolvable. A short name format has the added ability of being redefined in the local hosts file so that the system can run the Application Server even when disconnected from the network. Define the short name to 127.0.0.1 (local loopback) in the hosts file to run disconnected. A format disadvantage is being dependent on DNS for remote access. If DNS is not available, then connectivity is compromised.
A numeric IP address has the advantage of not requiring name resolution through DNS. A remote node can connect to the node you name with a numeric IP address without DNS being available. A format disadvantage is that the numeric IP address is fixed. We must change the setting of the hostName property in Express configuration documents whenever you change the machine IP address. Therefore, do not use a numeric IP address if you use DHCP, or if you change IP addresses regularly. Another format disadvantage is that we cannot use the node if the host is disconnected from the network.
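As an aside, a small Python sketch (not part of the WebSphere tooling) shows how a machine resolves under the three formats discussed above using the standard socket module; the example host names in the comments are just the ones from this article:

```python
import socket

short_name = socket.gethostname()        # short DNS name, e.g. "xmachine"
fqdn = socket.getfqdn(short_name)        # fully qualified name, e.g. "xmachine.manhattan.ibm.com"

try:
    ip_address = socket.gethostbyname(fqdn)  # numeric IP the name currently resolves to
except socket.gaierror:
    ip_address = None                        # DNS unavailable: name-based formats stop resolving

print("short name:", short_name)
print("fqdn:      ", fqdn)
print("ip address:", ip_address)
```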
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211167.1/warc/CC-MAIN-20180816191550-20180816211550-00202.warc.gz
|
CC-MAIN-2018-34
| 3,919
| 15
|
https://tomroelandts.com/index.php/articles/this-person-does-not-exist
|
code
|
The image below shows eight faces that were generated through a machine learning technique called a Generative Adversarial Network (GAN). The images were generated by the website https://thispersondoesnotexist.com/ (if that gives you a black page, try https://thispersondoesnotexist.com/image). As the URL of that website suggests, these faces are completely artificial, and do not correspond to existing persons!
Basically, a GAN is a combination of two neural networks. The first one (the generator) generates synthetic images, and the second one (the evaluator) tries to decide whether these images are fake or not. Initially, the evaluator is trained with a dataset of real images. After that, the two neural networks are trained as adversaries (hence, generative adversarial network). The first one tries to create images that fool the second one, and the second one tries to become better at spotting fake images. This technique seems to be able to generate some very convincing fake images.
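To illustrate that generator/evaluator loop in code, here is a deliberately tiny PyTorch sketch on 2-D toy data (nothing like the style-based model behind the site; the architecture, sizes, and learning rates are arbitrary illustrative choices):

```python
import torch
import torch.nn as nn

# Toy "real images": 2-D points from a shifted Gaussian stand in for a real dataset.
def real_data(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
evaluator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # a.k.a. the discriminator

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(evaluator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the evaluator to label real samples 1 and generated samples 0.
    real = real_data(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(evaluator(real), torch.ones(64, 1)) + \
             loss_fn(evaluator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the evaluator label its output as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(evaluator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Generated points should drift toward the real distribution around (2, 2).
print(generator(torch.randn(5, 8)))
```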
The site https://thispersondoesnotexist.com/ was created to demonstrate the results of a new state-of-the-art technique for GANs. The academic paper that describes the algorithm is on arXiv at A Style-Based Generator Architecture for Generative Adversarial Networks, and there is also a very accessible web page with a description of how the algorithm works at Style-based GANs – Generating and Tuning Realistic Artificial Faces. I’ve embedded a YouTube video from that article below.
Apart from demonstrating the main feature of the novel technique, which is that it allows separately changing different aspects of the images (gender, age, etc.), the video also clearly demonstrates that these images are not at all generated by simply combining a few faces as you would do with image editing software.
And, yes, I have noticed that some of the faces on https://thispersondoesnotexist.com/ are sometimes off on a subtle or not so subtle level; in particular teeth and ears are sometimes a bit messed up… However, that doesn’t change that these state-of-the-art images are quite impressive, if you ask me.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00893.warc.gz
|
CC-MAIN-2024-18
| 2,111
| 5
|
https://community.spiceworks.com/topic/2298302-cisco-asa-allow-internet-on-a-new-interface
|
code
|
I want to make a DMZ to run some VMs (Win10) that can communicate with the internet without restriction, but I want these VMs to be isolated from my production network.
I have created a new vSwitch in my vSphere environment and I have created a new VLAN on my switches.
No DHCP; my VMs will have static IPs with public DNS servers (220.127.116.11 and 18.104.22.168).
On my ASA, I have configured a new interface.
Currently, I can ping my gateway (switch) and my firewall, but it is impossible to access the internet.
Firewall interface (MindGeek_DMZ) : 192.168.69.1
Gateway : 192.168.69.253
VM : 192.168.69.10
On my ASA (via ASDM; I know it's a newbie way to manage an ASA), I have set up my ACL and my NAT like this:
But I still don't have access to the internet...
Any idea what I have forgotten?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00382.warc.gz
|
CC-MAIN-2022-27
| 770
| 11
|
http://www.ba-bamail.com/content.aspx?emailid=2209
|
code
|
Up until now, whenever we wrote facebook comments, we could only use text to describe our feelings, such as :) | :( or :/
Facebook has just announced to its 1 Billion members that emoticons are finally part of the facebook package! Now you can use any of these text shortcuts to make a real face or other emoticon when writing a status, commenting to a friend, or reacting to something interesting. Emoticons are always a fun way to spruce up a conversation or status with some whimsy fun ;) Just type the text codes in any facebook comment to see the graphic result!
Get free access To all the posts in our new app!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123560.51/warc/CC-MAIN-20170423031203-00530-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 616
| 3
|
https://rasterio.readthedocs.io/en/latest/topics/calc.html
|
code
|
Simple raster data processing on the command line is possible using Rasterio’s rio-calc command. It uses the snuggs Numpy S-expression engine. The snuggs README explains how expressions are written and evaluated in general. This document explains Rasterio-specific details of rio-calc and offers some examples.
Rio-calc expressions look like
(func|operator arg [*args])
func may be the name of any function in the numpy module, or one of the rio-calc builtins such as read, take, and asarray;
operator may be any of the standard Python arithmetic or logical operators.
The arguments may themselves be expressions.
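For example, the scaling expression (+ 2 (* 0.95 (read 1))) used further down nests a multiplication expression, which itself contains a read expression, inside an addition.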
Copying a file
Here's a trivial example of copying a dataset. The expression (read 1) evaluates to all bands of the first input dataset, an array with shape (3, 718, 791) in this case.
Note: rio-calc's indexes start at 1.
$ rio calc "(read 1)" tests/data/RGB.byte.tif out.tif
Reversing the band order of a file
The expression (read i j) evaluates to the j-th band of the i-th input dataset. The asarray function collects bands read in reverse order into an array with shape (3, 718, 791) for output.
$ rio calc "(asarray (read 1 3) (read 1 2) (read 1 1))" tests/data/RGB.byte.tif out.tif
Stacking bands of multiple files
Bands can be read from multiple input files. This example is another (slower) way to copy a file.
$ rio calc "(asarray (read 1 1) (read 2 2) (read 3 3))" \ > tests/data/RGB.byte.tif tests/data/RGB.byte.tif tests/data/RGB.byte.tif \ > out.tif
Datasets can be referenced in expressions by name, and single bands can be picked out with take:
$ rio calc "(asarray (take a 3) (take a 2) (take a 1))" \ > --name "a=tests/data/RGB.byte.tif" out.tif
The third example, re-done using names, is:
$ rio calc "(asarray (take a 1) (take b 2) (take b 3))" \ > --name "a=tests/data/RGB.byte.tif" --name "b=tests/data/RGB.byte.tif" \ > --name "c=tests/data/RGB.byte.tif" out.tif
Read and take
The functions read and take overlap a bit in the previous examples but are rather different. The former involves I/O and the latter does not. You may take from any array, as in this example.
$ rio calc "(take (read 1) 1)" tests/data/RGB.byte.tif out.tif
Arithmetic operations can be performed as with Numpy. Here is an example of scaling all three bands of a dataset by the same factors.
$ rio calc "(+ 2 (* 0.95 (read 1)))" tests/data/RGB.byte.tif out.tif
Here is a more complicated example of scaling bands by different factors.
$ rio calc "(asarray (+ 2 (* 0.95 (read 1 1))) (+ 3 (* 0.9 (read 1 2))) (+ 4 (* 0.85 (read 1 3))))" tests/data/RGB.byte.tif out.tif
Logical operations can be used in conjunction with arithmetic operations. In this example, the output values are 255 wherever the input values are greater than or equal to 40.
$ rio calc "(* (>= (read 1) 40) 255)" tests/data/RGB.byte.tif out.tif
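For readers who prefer the Python API, the per-band scaling example above corresponds roughly to the following rasterio/numpy sketch (the clipping and the cast back to the source dtype are simplifications, not something the rio-calc command spells out):
import numpy as np
import rasterio

# Read all bands, apply the same per-band scale/offset as the rio-calc
# example, and write the result to a new dataset with the same profile.
with rasterio.open("tests/data/RGB.byte.tif") as src:
    profile = src.profile
    bands = src.read().astype("float64")  # shape (3, 718, 791)

scaled = np.stack([
    2 + 0.95 * bands[0],
    3 + 0.90 * bands[1],
    4 + 0.85 * bands[2],
])

# Clip and cast back to the source dtype (uint8 here) before writing.
scaled = np.clip(scaled, 0, 255).astype(profile["dtype"])

with rasterio.open("out.tif", "w", **profile) as dst:
    dst.write(scaled)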
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.23/warc/CC-MAIN-20231203125921-20231203155921-00857.warc.gz
|
CC-MAIN-2023-50
| 2,702
| 37
|
https://learn-carefirst.hellofurther.com/Employers/About_Accounts/DCAP
|
code
|
DCAP
The content in this section is meant to help employers learn about our DCAP plans and includes helpful forms and resources necessary to support groups through onboarding, enrollment, and beyond.
Helpful Downloads and Links:
DCAP Member materials - for even more information about our DCAPs.
DCAP Claim Form - this form can be used as documentation when seeking a reimbursement claim.
Frequently Asked Questions:
What dependent care expenses are eligible for reimbursement from the DCAP account?
What types of providers are eligible for the DCAP account?
What expenses are not eligible to be reimbursed from the DCAP account?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506477.26/warc/CC-MAIN-20200401223807-20200402013807-00156.warc.gz
|
CC-MAIN-2020-16
| 658
| 1
|
https://www.rumos.pt/oferta/html5-application-development-fundamentals-40375-porto-laboral-20210809/
|
code
|
This course leverages the same content as found in the Microsoft Official Academic Course (MOAC) for this exam.
The Microsoft Technology Associate (MTA) is Microsoft’s newest suite of technology certification exams that validate fundamental knowledge needed to begin building a career using Microsoft technologies. This program provides an appropriate entry point to a future career in technology and assumes some hands-on experience or training but does not assume on-the-job experience.
- Manage the Application Life Cycle
- Build the User Interface by Using HTML5
- Format the User Interface by Using CSS
- Managing the Application Life Cycle
- Building the User Interface by Using HTML5: Text, Graphics, and Media
- Building the User Interface by Using HTML5: Organization, Input, and Validation
- Understanding CSS Essentials: Content Flow, Positioning, and Styling
- Understanding CSS Essentials: Layouts
- Managing Text Flow by Using CSS
- Managing the Graphical Interface by Using CSS
- Creating Animations, Working with Graphics, and Accessing Data
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506420.84/warc/CC-MAIN-20230922134342-20230922164342-00069.warc.gz
|
CC-MAIN-2023-40
| 1,059
| 13
|
https://meta.serverfault.com/users/22979/michael-dillon
|
code
|
Apparently, this user prefers to keep an air of mystery about them.
Vancouver, British Columbia, Canada
Member for 10 years, 9 months
5 profile views
Last seen Nov 13 '12 at 5:27
- Stack Overflow: 29.6k reputation, 5 gold badges, 62 silver badges, 98 bronze badges
- Server Fault: 1.7k reputation, 12 silver badges, 15 bronze badges
- Super User: 899 reputation, 1 gold badge, 6 silver badges, 11 bronze badges
- Unix & Linux: 835 reputation, 4 silver badges, 7 bronze badges
- Software Engineering: 273 reputation, 1 silver badge, 4 bronze badges
- View network profile
Top network posts
- 318 On EC2: sudo node command not found, but node without sudo is ok
- 183 How do I install PyCrypto on Windows?
- 148 Is there a way to inspect the current rpath on Linux?
- 67 Is there any benefit to using IPv6 on my home network?
- 64 Why use Celery instead of RabbitMQ?
- 46 Building Python and more on missing modules
- 40 Messaging Confusion: Pub/Sub vs Multicast vs Fan Out
- View more network posts →
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657139167.74/warc/CC-MAIN-20200712175843-20200712205843-00435.warc.gz
|
CC-MAIN-2020-29
| 962
| 20
|
http://bdr-project.org/docs/stable/quickstart.html
|
code
|
This section gives a quick introduction to BDR, including setting up a sample BDR installation and a few simple examples to try.
These instructions are not suitable for a production install, as they neglect security considerations, proper system administration procedure, etc. The instructions also assume everything is all on one host so all the pg_hba.conf examples etc show localhost. If you're trying to set up a production BDR install, read the rest of the BDR manual, starting with Installation and Node management functions.
Note: BDR uses libpq connection strings throughout. The term "DSN" (for "data source name") refers to a libpq connection string.
For this Quick Start example, we are setting up a two node cluster with two PostgreSQL instances on the same server. We are using the terms node and instance interchangeably because there's one node per PostgreSQL instance in this case, and in most typical BDR setups.
To try out BDR you'll need to install the BDR extension and the modified PostgreSQL release that it requires to run. Then it's necessary to initdb new database install(s), edit their configuration files to load BDR, and start them up.
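As a rough sketch of those steps for a single node (paths and parameter values here are placeholders, and the complete list of required settings is given in the Installation chapter, so treat this as illustrative only):
$ initdb -D /tmp/bdr-node1 -A trust
$ cat >> /tmp/bdr-node1/postgresql.conf <<'EOF'
shared_preload_libraries = 'bdr'
wal_level = 'logical'
track_commit_timestamp = on
max_wal_senders = 10
max_replication_slots = 10
EOF
$ pg_ctl -D /tmp/bdr-node1 -l /tmp/bdr-node1.log start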
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358685.55/warc/CC-MAIN-20211129014336-20211129044336-00502.warc.gz
|
CC-MAIN-2021-49
| 1,164
| 5
|
https://www.dekazeta.net/foro/files/file/1599-pkhex/
|
code
|
Pokémon core series save editor, programmed in C#, for Switch, Nintendo 3DS and GameCube.
Supports the following files:
- Save files ("main", *.sav, *.dsv, *.dat, *.gci)
- GameCube Memory Card files (.raw, .bin) containing GC Pokémon savegames.
- Individual Pokémon entity files (.pk*)
- Mystery Gift files (.pgt, .pcd, .pgf, .wc*) including conversion to .pk*
- Importing teams from Decrypted 3DS Battle Videos
- Transferring from one generation to another, converting formats along the way.
Data is displayed in a view which can be edited and saved. The interface can be translated with resource/external text files so that different languages can be supported.
Pokémon Showdown sets and QR codes can be imported/exported to assist in sharing.
Nintendo 3DS savedata containers use an AES MAC that cannot be emulated without the 3DS's keys, thus a resigning service is required (svdt, save_manager, JKSM, or SaveDataFiler).
We do not support or condone cheating at the expense of others. Do not use significantly hacked Pokémon in battle or in trades with those who are unaware hacked Pokémon are in use.
What's new in version 26.01.20
- Added: Form Argument legality checks. Alcremie, Runerigus, Yamask, Hoopa, and Furfrou. Thanks @CanoeHope!
- Added: More static encounter locations.
- Fixed: Footprint ribbon is now checked for Gen8.
- Fixed: Slowpoke-1 Hidden Ability is now banned, and bred Mimikyu now allows Hidden Ability.
- Changed: A little bit of the program's internal structures have been tweaked for performance.
- Added: Gen8 Block Research/Export/Import tool, with direct block edits.
- Can swap in a full Fashion block, for example. Or, edit your title screen to show 6 Magikarp!
- Edit things directly! Known block objects can be selected, and all exposed Properties can be changed.
- Can compare two saves to see what blocks/values changed.
- Added: Gen5 Subway score editing. Thanks @egzn!
- Added: More event flag/const have been documented. Thanks @FeralFalcon & @asterysx!
- Fixed: Internal API changes for more Thread safety. (People reuse PKHeX.Core in multithreaded applications, and the Rand utility didn't work correctly).
- Fixed: German translation no longer misbehaves for certain ribbons.
- Fixed: Handling for Form Arguments is now performed correctly. Will no longer clear for Runerigus on edit.
- Fixed: Gen7 LGPE Dumping of Go Park Entities with invalid file names are now sanitized before saving. Thanks @xJam-es!
- Fixed: Gen4 HGSS Pokéwalker course unlock cheat now works as intended.
- Changed: Gen8 SWSH Block reading/writing is now much more efficient.
- Changed: Gen7 LGPE Awakening Values are now applied more liberally via Control-click Random. Only an attack IV of 0 will not add AVs. Thanks slp32!
- Changed: Spanish Translation updated. Thanks @egzn!
- Updated: Banlist now checks for unavailable forms and unavailable hidden abilities
- Changed: Another round of legality check updates. Thanks @iiippppk, @BetaLeaf, @crzyc, @Bappsack & @ReignOfComputer
- Changed: Rewrote EvolutionTree and MemoryVerifier to better handle the new rules that were introduced in Gen8.
- Added: Gen6 In-game trades are now checked for their Memory values.
- Added: $suggest for Ball, sets a legal ball, with preference for color matching.
- Added: $shiny0 for square shinies.
- Added: $suggestAll for all TR moves
- Added: $suggest for all legal Ribbons, and $suggestNone to remove all but required ribbons.
- Changed: Gen7 LGP/E now uses the large box sprites. Thanks @sora10pls!
- Added: Alcremie can now specify the topping type (next to form).
- Added: Click the Nature/StatNature labels to copy the other's value.
- Added: Gen8 trainer card's trainer number can now be edited via the Trainer Editor.
- Fixed: Gen5 CGear Background import from file now works. Thanks @CyraFen!
- Fixed: Gen3 Blank Saves now behave correctly when setting a slot.
- Fixed: VC origin sprite (GameBoy) now displays properly.
- Unban scrafty / scraggy HA
- WasLink comparison fix
- Updates for invalid genNumber
- Fixed flag error in Parsed
- Added 2 additional static encapsulation slots
- Minor updates
- Fixed the anchoring of the textbox
- Only recording flags are allowed for Move are enabled.
Introducing Sword/Shield support! Thanks @SciresM and @sora10pls for troubleshooting prior to release!
- Initial Legality Checking is provided. Please refer to the forums when reporting legality issues for Generation 8 parsing.
- Bag editing, Pokédex, and Trainer Info editing is provided.
- Changed: PKHeX.Core.dll is now merged in with the main executable.
- Changed: PKHeX.WinForms spriting has now been split into a separate project. On build, it is merged into the main executable.
- Changed: .NET Core 3 support added for WinForms builds. .NET Framework 4.6 build is still the main build option.
- Changed: Project internals now use C# language version 8, the latest. Nullable compiler checks enabled for PKHeX.Core.
- Removed: Mono build no longer required due to font loading rework. No platform specific code remains!
- Changed: Slot grids are now generated instead of manually created. Party and Battle Box now appear differently.
- Changed: Encounter Slot generators now use game-specific logic to yield slots.
- Fixed: Gen6 Fashion for females now exposes the remaining fields.
- Fixed: Legality parsing for misc things fixed. Thanks @Rayqo, @steph9009, @iiippppk!
- Fixed: Mystery Gift received flags are now set correctly. Thanks tsubasa830!
- Fixed: Loading box data binaries now applies it to the current box. Thanks @PKMWM1!
- Fixed: Gen4 Poketch now behaves correctly in the editor, no longer deleting itself.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151699.95/warc/CC-MAIN-20210725143345-20210725173345-00140.warc.gz
|
CC-MAIN-2021-31
| 5,658
| 70
|
https://www.groupon.co.za/deals/jimmy-s-killer-prawns
|
code
|
Further Information: No takeaways allowed. Cannot be used in conjunction with any other special offers in-store. No substitutes. Platter cannot be shared by more than what is stipulated on the voucher. Groupon does not cover additional tax, service charge or gratuity. Picture displayed is only a representation. Subject to availability. Merchant is solely responsible to purchasers for the care and quality of the advertised goods and services.
Jimmy’s Killer Prawns in Durban, as the name suggests, places a large focus on seafood and prawns in particular. On the menu, there are also meat and poultry items along with a large variety of drinks and desserts.
Choose between these options:
R225 for a Sumptuous Killer Seafood Platter for Two - Gateway (R225 value)
R415 for a Sumptuous Killer Seafood Platter for Four - Gateway (R415 value)
R623 for a Sumptuous Killer Seafood Platter for Six - Gateway (R623 value)
R225 for a Sumptuous Killer Seafood Platter for Two - Westwood Mall (R225 value)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612013.52/warc/CC-MAIN-20170529034049-20170529054049-00045.warc.gz
|
CC-MAIN-2017-22
| 998
| 7
|
https://oppositelock.kinja.com/i-went-on-513904364
|
code
|
...the internet, and I found THIS! Lots and lots of awesome wallpapers. Look inside the post for more.
My personal favorite:
The next few are smaller but I sometimes use those as well. Just set the wallpaper for Center and choose a nice color for the rest of the screen.
I completely forgot about that website. And man, have they improved the quality of the photos. It's got brightness and exposition and angles and everything!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00097.warc.gz
|
CC-MAIN-2019-43
| 427
| 4
|
http://xvisionx.com/compile-error/compile-error-object-required.html
|
code
|
Top Problem: divide by zero Often due to missing data or uninitialized variables. Wright Top Abstract Descriptions of the more commonly encountered SPICE problems, broken down into functional areas with suggestions on how to avoid problems. Confirm the model you expect is provided by the PCK file you're using. Has anyone ever actually seen this Daniel Biss paper? http://xvisionx.com/compile-error/compile-error-object-required-vba-excel.html
SPICE Data Accuracy The SPICE ancillary information system might be thought of as a collection of flexible "data buckets" accompanied by means to place data into those buckets and means to Can taking a few months off for personal development make it harder to re-enter the workforce? If you think the problem is related to a SPICE routine, look at the reference documents related to the routine—these are listed in the required reading section of the routine's header. See Error Required Reading, error.req, or the header of trcoff_c for further information. https://lists.freedesktop.org/archives/spice-devel/2010-February/000095.html
Top Problem: Can't determine what states are computable from SPK files Good question. After this just proceed to the compilation of qemu enabling spice, passing --enable-spice to configure: ./configure --enable-spice # make & make install Hope this helps. See Kernel Required Reading, kernel.req, for further information on the NAIF text kernel format. Since I have not found > a windows port of berkley spice, at least not one which can be ran from > a command line without a fancy gui, I would
If the file was transferred between two systems with incompatible binary file formats, for example an HP workstation and a PC, the problem is that binary kernels on one system are Understand the definitions: many geometric quantities have a variety of definitions. You should then see a the black&white DOS shell window. Bash scripting - how to concatenate the following strings?
The code for several very simple SPICE-based programs is provided in a set of so-called "cookbook" programs found in each copy of the Toolkit. Compile Error Object Required Access You can check "msc.out" for error messages. Top Problem: Earth orientation given by a text PCK is too inaccurate The CSPICE PCK system supports binary PCK files that are capable of supporting high-accuracy rotation models. http://www.rayslogic.com/Software/Spice/compile_berkeley_spice_3f5.htm Also, it is possible the disk space was exhausted on the target system during the transfer.
You need to comment out the error messages like this: #ifndef CONFIGURED//error error error error //Operating system type unknown //error error error error #endif The next problem deals with Top Problem: SPICE code is not thread safe. Varying the order in which the files were loaded can affect the state vectors returned by the SPK system. I am the > > developer of a loudspeaker design program called GSpeakers > > (http://gspeakers.sf.net) which I would like to port to windows but > > currently it requires a
D. https://sourceforge.net/p/gspiceui/bugs/15/ The topics covered are time conversions (tictoc), reading a trajectory file (states), computing the angular separation of two objects as seen from a third (simple), and computing a spacecraft's sub-observer point Compile Error Object Required Unfortunately (atleast on 12.10) it is not available in the main repositories, so you will have to download and compile it: wget http://spice-space.org/download/releases/spice-protocol-0.12.3.tar.bz2 tar -xjf spice-protocol-0.12.3.tar.bz2 cd spice-protocol* ./configure make sudo Compile Error Object Required Excel Macro See the installation instructions for details.
Normally, kernel files should be loaded once per program run, usually during initialization. navigate here Regards Daniel Sundberg > Regards > Daniel Sundberg > > --- > http://sumpan.com > > > > > ------------------------------------------------------- > This SF.Net email is sponsored by The 2004 JavaOne(SM) Conference > Hardware architecture (the CPU chip) determines the format of numeric binary data; the two formats used by computers supported by NAIF are called "big endian" and "little endian." Because kernels are It seems like some kind of issue mixing glibc and kernel headers. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Alexander Larsson Red Hat, Inc alexl at redhat.com alexander.larsson at gmail.com He's a short-sighted shark-wrestling vampire hunter Compile Error Object Required Error In Vba
There are often many of these kernels. For some orbits, some elements are not easily recovered from state vectors. Check these values are as expected. Check This Out Top E-kernel Top Problem: Query takes forever to complete For queries not involving the ordering of output, this sometimes happens because inadequate constraints were supplied.
Browse other questions tagged qemu or ask your own question. In general, if you have difficulty building the SPICE Toolkit, it may be a useful test to see whether you can build a simple ``hello world'' program in the same environment. For example you will find Cross product of two vectors Product of two vectors, cross Vectors, cross product of two all listed in the permuted index.
Top Problem: Arithmetic on time values yields incorrect results Within the SPICE system TDB times are represented as double precision numbers, and these are not generally accurate to better than 1.E-7 In the discussion below, it's implicit that any of the problem areas listed above should be examined whenever you diagnose a failure. Top Problem: UTC-TDB conversion in SPICE does not appear accurate This is not truly a common problem; it has arisen only in the context of radio science applications. SPICE Toolkit software undergoes considerable review and testing; the documentation for the SPICE software is extensive and (usually) accurate.
Here's a checklist of things to get right before embarking on solving a problem with SPICE, or comparing SPICE results with those obtained from alternate sources. 1. Stay Connected If you wish to keep well informed about SPICE related news, consider signing up with the spice_announce Mailman system. Home Compile error in hidden module: SolverCode. Understand the expected accuracy and precision: for example, the Astronomical Almanac frequently presents results having claimed accuracy of 0.01 degree. this contact form These documents may contain useful information for diagnosing and correcting the problem.
See below. SPACIT will tell you what instrument the pointing data is for, which base frame the pointing is referenced to, whether angular velocity data are also present in the segment, and the Any open attempt will fail if the application attempting the operation does not have permission to access the file. These cannot be expected to agree with SPICE results at the arcsecond level.
No mechanisms to ensure thread safe behavior exist in standard ANSI C or FORTRAN 77. But, as the use of SPICE spreads, other agencies may also offer generic kernels. What Else is Needed Use of the SPICE system requires (at a minimum) one of the following: a FORTRAN 77 compiler/linker, an ANSI C compiler/linker, an IDL installation, or a MATLAB You signed out in another tab or window.
Included is the patch, apply to CVS tree and run autogen.sh afterwards. SPICE does not currently contain routines that provide a convenient answer. I don't need the latest version, just anything that > works would be nice. > When I try to compile ng-spice-reworked on my WinXP/mingw/msys system I get the following error message: We need something that will cover historical data and off-site storage. © Copyright 2006-2016 Spiceworks Inc.
BillMills closed this Mar 13, 2014 Sign up for free to join this conversation on GitHub. The permuted index is found in the /doc/html/info folder and is named spicelib.idx in FORTRAN toolkits and cspice_idx in CSPICE, Icy and Mice Toolkits. SPICE Kernels SPICE stores data in SPICE Toolkits since version N57 include an error check to text file readers to ensure the files had the correct line terminators for the platform. Therefore, state vectors of bodies relative to the solar system barycenter cannot be expected to compare well across planetary SPK files based on different integrations (having different underlying planetary ephemerides).
I had a difficult time getting Spice to compile with newer Microsoft compilers. SPICE will then attempt to read from whatever file the application has connected to the logical units SPICE had allocated. Most of this document is concerned with matching symptoms to possible causes and solutions.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945082.84/warc/CC-MAIN-20180421071203-20180421091203-00095.warc.gz
|
CC-MAIN-2018-17
| 8,681
| 17
|
https://jwjudge.com/word-count-chronicles-day-50-pulling-my-hair-out/
|
code
|
12/24: I missed posting yesterday, for the first time in 50 days. I even thought about back-dating this post so that it would appear that the streak had continued, but that felt disingenuous. So here we are.
While I am (impatiently) waiting on beta readers to get back to me about Vulcan Rising, I am having a difficult time deciding what to pursue next.
Yesterday, I spent several hours plotting out a historical fiction novel that I have an on-again-off-again relationship with, and which I’ve written about before here: Traveling to Add Depth to Your Fiction Writing and Researching Weather to Write Better Fiction.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819847.83/warc/CC-MAIN-20240424174709-20240424204709-00454.warc.gz
|
CC-MAIN-2024-18
| 609
| 3
|
https://forum.bubble.io/t/document-management-system/91509
|
code
|
I need to build a document management system and I am trying to implement it in bubble.
The idea is that there are three types of users: Administrators, editors and readers. Editors and administrators should be able to upload files (mainly documents in the form of PDF and XML). Readers should be able to see a filesystem (or similar, at least a system with folders of sorts to organize the documents) and then open the documents within the app. The documents should open in (internal) tabs, so users can quickly move between the documents. A key function is to provide document management control, meaning that editors and administrators can keep a record of document revisions and control and oversight over who edited documents and when. The whole purpose is to ensure that readers always are supplied with the latest version of documents. Ideally the app should be designed in a way so it can be transformed to a native android app later on (phonegap?) as well, to allow for offline document reading.
Would that be possible with bubble? Any help appreciated!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475711.57/warc/CC-MAIN-20240301225031-20240302015031-00221.warc.gz
|
CC-MAIN-2024-10
| 1,062
| 3
|
https://mail.haskell.org/pipermail/web-devel/2011/001765.html
|
code
|
[web-devel] Content-Length on sendFile
michael at snoyman.com
Wed Jun 15 04:29:52 CEST 2011
Let me point out one other distinction: sendFile versus yesod-static.
The former is a function you would call from a normal handler, while
yesod-static is the "magical" package which would actually know all of
the stuff about your files at compile time. For the sendFile case, the
only options for getting the file size are (1) the programmer manually
adding the header and (2) Yesod automatically doing a system call to
As for the behavior of Warp... while I agree Kazu that we should
reduce system call overhead, it might make sense for Warp to perform
the system call to get file size *if* no content-length header is
And I'm still very uncomfortable setting the content-length header for
static files based on compile-time information. In this case, having
the wrong value (e.g., someone modified a CSS file after compiling)
will completely break things and corrupt an open HTTP connection.
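As a minimal illustration of that failure mode (a Python sketch, not anything from Warp or Yesod): an HTTP/1.1 client splits a keep-alive stream into responses purely by Content-Length, so a stale value leaves surplus bytes that are misread as the start of the next response.
# The header claims 5 bytes, but the file grew after compilation.
raw = (b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\n"
       b"body-grew-after-compile"
       b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

head, _, rest = raw.partition(b"\r\n\r\n")
declared = int(dict(
    line.split(b": ", 1) for line in head.split(b"\r\n")[1:]
)[b"Content-Length"])

body, leftover = rest[:declared], rest[declared:]
print(body)      # b'body-'  -- truncated response
print(leftover)  # stray bytes misparsed as the start of the next response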
On Wed, Jun 15, 2011 at 5:22 AM, Kazu Yamamoto <kazu at iij.ad.jp> wrote:
>> I apologize for the confusing terminology. I am not differentiating between
>> sending a static file with sendfile and a streaming response. I
>> am differentiating between 2 different use cases for sending static files
>> (with sendfile). For all of my web applications, I know what all the static
>> files are and they will never change until I deploy another web application.
>> That means I can stat the files once when the application is deployed and keep
>> that information in memory. So I already have the file length information to
>> include in the header, even though I don't do a file stat when the file is
>> requested. wai-app-static and yesod-static supports these techniques.
> Thanks. I think I understand. :)
> So, do you support to *not* change the API (apps should add CL: by
> web-devel mailing list
> web-devel at haskell.org
More information about the web-devel
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948580416.55/warc/CC-MAIN-20171215231248-20171216013248-00047.warc.gz
|
CC-MAIN-2017-51
| 1,948
| 31
|
https://pingperfect.com/index.php/knowledgebase/593/Avorion--Server-Configuration.html
|
code
|
It's easy to configure your Pingperfect Avorion Server. Just follow the steps below.
- Open the 'Configuration Files' section from your control panel.
- Select the 'Text Editor' option next to 'Saves\avorion_galaxy\server.ini'
- Refer to the Example Configuration below and change the respective settings where you need to do so in order to configure your server to your desires.
Seed=EiqkY5oYjd //The random seed used for galaxy generation. Accepts upper and lower case letters and numbers.
Difficulty=-1 //Note the typo. The difficulty of the server. Accepts an integer between -3 and 3.
HardcoreEnabled=false //Toggles hardcore mode server wide
InfiniteResources=false //Toggles infinite resources (or "creative mode") server wide
CollisionDamage=1 //A multiplier for damage to colliding objects. Accepts floating-point numbers, e.g. 0.5 is 50% collision damage.
SafePlayerInput=false //Disabling this will result in much smoother performance at the time of this writing. Enabling this may result in very bad performance over slow networks.
PlayerToPlayerDamage=true //Enables/Disables player to player damage server wide.
LogoutInvincibility=true //A player's ships are indestructible as long as the player is offline.
LogoutInvincibilityDelay=30 //The time in seconds that a player must be offline until his ships become invincible.
DevMode=false //Enables/Disables devmode
ExplicitCallables=true //Toggles explicit callables
BigWreckageDespawnTime=1800 //Time in seconds it takes for new (as in: not created by the generator but during gameplay, such as combat) large wreckages (more than 15 blocks) to disappear.
SmallWreckageDespawnTime=900 //Time in seconds it takes for new (as in: not created by the generator but during gameplay, such as combat) small wreckages (15 blocks or less) to disappear.
LootDiminishingFactor=0.00499999989 //Multiplier that's applied to the value of a block/wreckage/ship to determine the dropped money and resources.
ResourceDropChance=0.400000006 //Chance of resources dropping from destroyed blocks
TurretDropChanceFromTurret=0.0250000004 //The chance that a turret will drop from an NPC space craft when the turret is destroyed
TurretDropChanceFromCraft=0.25 //The chance that a turret will drop from an NPC space craft when the craft is destroyed
TurretDropChanceFromBlock=0.00499999989 //The chance that a ship system will drop from a block of wreckage when it is destroyed
SystemDropChanceFromCraft=0.200000003 //The chance that a ship system will drop from an NPC space craft when the craft is destroyed
SystemDropChanceFromBlock=0.00499999989 //The chance that a ship system will drop from a block of wreckage when it is destroyed
ColorDropChanceFromCraft=0.0500000007 //The chance that a color will drop from a space craft when the craft is destroyed
ColorDropChanceFromBlock=0.00249999994 //The chance that a color will drop from a block of wreckage when it is destroyed
MaximumFightersPerSectorAndPlayer=-1 //The total number of fighters that can be in one sector at once, defaults to -1, meaning infinite.
MaximumBlocksPerCraft=-1 //The total number of blocks that any ship can be made up of, defaults to -1, meaning infinite
MaximumVolumePerShip=-1 //The total volume that ships can reach, defaults to -1, meaning infinite (Needs testing if it applies to AI)
MaximumVolumePerStation=-1 //The total volume that stations can reach, defaults to -1, meaning infinite (Needs testing if it applies to AI)
MaximumPlayerShips=-1 //The total number of ships a player may own at any one time, defaults to -1, meaning infinite
MaximumPlayerStations=-1 //The total number of stations a player may own at any one time, defaults to -1, meaning infinite
MaximumBlocksPerTurret=250 //The total number of blocks per turret
PlayerInventorySlots=1000 //The total number of player inventory slots
AllianceInventorySlots=1000 //The total number of a single Alliance's inventory slots
Version=0.30.2 //Server Version
sameStartSector=true //Indicates if all players should start in the same sector. If false, a random empty sector on the outer rim is populated and used as the home sector for each new player.
startUpScript=data/scripts/server/server.lua //Specifies a Lua script to run on server startup.
startSectorScript=startsector //Specifies a Lua script to run when generating a start sector for a player.
saveInterval=600 //The time between server saves, in seconds.
sectorUpdateTimeLimit=300 //The time that sectors which don't qualify for out-of-sector-simulation are kept within memory.
emptySectorUpdateInterval=0.5 //The time between update steps of sectors without players.
workerThreads=8 //Number of concurrent threads that are used to update sectors. (Identical to the "Threads" setting ingame.)
generatorThreads=2 //Number of concurrent threads that are used to generate new sectors while players are calculating nav routes.
scriptBackgroundThreads=2 //Number of concurrent threads that are used to run heavy script calculations that are called during gameplay, an example would be the generation of new ship models
aliveSectorsPerPlayer=5 //Number of sectors kept alive for each player and alliance on the server, provided that there are player or alliance ships in that sector.
weakUpdate=true //Indicates if the sectors without players should be simulated with a "weak" update, which is less accurate but a lot faster than the normal update step.
profiling=false //Toggles performance and memory profiling. Server performance may suffer slightly, but /status command will print a lot more detailed output.
sendCrashReports=true //Toggle whether crash reports are sent
hangDetection=true //Toggle whether hang detection is on or off (hang detection identifies whether the server has crashed)
sendSectorDelay=2 //Delay from sending sector
placeInShipOnDeathDelay=7 //Delay when a player dies and is to be placed back in a ship
port=27000 //Do not change, will stop your server from operating correctly
broadcastInterval=5 //The time between server mass update broadcasts in seconds.
isPublic=true //Privacy setting. If enabled, only one administrator is allowed on the server and the server will not show up on the LAN menu. (Same as the command line parameter -public)
isListed=true //Privacy setting. If enabled together with useSteam, the server will show up in public server lists. (Same as the ingame setting "List Publicly")
isAuthenticated=true //Privacy setting. Toggles Steam user authentication. (Identical to the ingame setting "Authenticate Users")
sendStatsToAdmins=true //Sends stats to the administrator if true
useSteam=true //Determines whether the server is using Steam networking and can be joined via Steam, using options like "join game".
rconIp=126.96.36.199 //Do not change, will stop your server from operating correctly
rconPassword=ki27c //The password that is needed to connect to the RCON server. If blank, RCON is disabled.
rconPort=35003 //Do not change, will stop your server from operating correctly
maxPlayers=1 //The max number of players allowed on the server at one time, if you change this from the amount of player slots you have purchased, the Gamepanel will identify this and stop your server
name=Avorion Server //The name of the server, shown in the server list.
description=Welcome to another Pingperfect.com Gameserver //A description for the server, shown in the server list.
password= //Sets the password users attempting to join the server need to enter, if they enter incorrectly they will be kicked
pausable=false //Enables/Disables pausing the server
accessListMode=Blacklist //Determines whether the server uses a blacklist or a whitelist to restrict access.
Looking for a game server host known for brilliant 24/7 customer support and quality hardware?
Try a Pingperfect Avorion server today! https://pingperfect.com/gameservers/avorion-game-server-hosting-rental.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00189.warc.gz
|
CC-MAIN-2023-40
| 7,863
| 70
|
https://www.experts-exchange.com/questions/10327948/rcp-permission.html
|
code
|
I have two Solaris machines, A and B; both have the account "accnt" established.
The .rhosts at A:~accnt has the following line:
The .rhosts at B:~accnt has the following line:
I am able to do the following on A
rcp test.txt B:~accnt
But if I try the corresponding one on B
rcp test.txt A:~accnt
I get "permission denied".
Any idea where to look for the configuration problem?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948426.82/warc/CC-MAIN-20180426164149-20180426184149-00289.warc.gz
|
CC-MAIN-2018-17
| 377
| 9
|
https://search.datacite.org/works/10.6084/m9.figshare.21434015.v1
|
code
|
Additional file 2 of: MED12 mutation as a potential predictive biomarker for immune checkpoint inhibitors in pan-cancer. Yong Zhou, Yuan Tan, Qin Zhang, Qianqian Duan & Jun Chen
Additional file 2: Fig. S2. The pan-cancer landscape of MED12 mutations across human tumors. The proportion of MED12 mutated tumors identified for each cancer type with alteration frequency in TCGA pan-cancer cohorts.
This data repository is not currently reporting usage information. For information on how your repository can submit usage information, please see our documentation.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647525.11/warc/CC-MAIN-20230601010402-20230601040402-00769.warc.gz
|
CC-MAIN-2023-23
| 558
| 3
|
https://www.w3.org/Bugs/Public/show_bug.cgi?id=19788
|
code
|
In the current draft, the Encrypted Block Encountered algorithm may fire a needkey event if the required key is not available. For ISO BMFF/CENC^ and WebM, the needed keys can always be identified based on headers, so no new information should be available during the decrypt/decode phase that wasn't available when the user agent determined that the stream may be encrypted (Potentially Encrypted Stream Encountered algorithm ) and sent a needkey event. Thus, multiple identical events may be sent in certain valid scenarios. This issue is tracking whether we should have a different behavior in the case that a required key is unavailable. (Note that this is not a MediaError because there are legitimate use cases where a key may have been requested as a result of the Potentially Encrypted Stream Encountered event but has not yet been received by the time the key is needed.)
* Report a "needkey" event on the media element.
* Report a different event on the media element.
* Report an event on the MediaKeySession element if one has been associated with the media element.
* Report no event and assume the event from the Potentially Encrypted Stream Encountered algorithm is sufficient.
- This works for ISO BMFF/CENC and WebM and other container formats with “headers” but would not work for a container where key IDs can appear in blocks but not identified based on “headers”. One option in those cases would be to consider this the “may contain” case, which should then be renamed to reference the first time a key reference is encountered. (More on this below.)
If we do send events in this case, how many events should be sent for the same key ID? Since audio and video (or multiple streams of each) may be processed at different times or on different threads, it's possible that an event could be fired for each even if they use the same key ID. Should we allow, prevent, or discourage this? (There is a similar issue for the Potentially Encrypted Stream Encountered algorithm when multiple files/streams/tracks contain the same Initialization Data.)
One option might be to change how we think about the two scenarios. This might make implementations more difficult but would resolve some of these issues.
* The first algorithm would be “First Time a Key Reference is Encountered.” Each container would need to specify what this means. For example, ISO BMFF/CENC might define this as encountering a PSSH even if the PSSH does not explicitly reference a key. For WebM, this might be when ContentEncryption/ContentEncKeyID is parsed. For a container without such headers, it might be the first time each key ID is encountered (i.e. in a block).
* The second algorithm would continue to be "Encrypted Block Encountered" with the change that the Key Presence step does not fire an event or an error in the case where the needed key is not available. Note that this may or may not occur at the same time as the first reference to a specific key (the first algorithm).
The first algorithm would be the only one that sends an event, and the second one would describe the behavior of playback (see bug 18515). Applications would not be informed that a key is needed for decrypting a current block. They shouldn’t really need to know for key-related reasons, but are there other reasons? Would/should existing events (i.e. stalled?) cover any such needs?
Follow-up: Should the event for the Encrypted Block Encountered algorithm contain Initialization Data or the key ID?
If we choose to report an event, we need to decide what data to report in the event.
Step 7 of says to fire a needkey event where "initData = block initData". "block initData" was set in step 4, which says, "If the block (or its parent entity) has Initialization Data, let block initData be that initialization data."
The problem is that Initialization Data may not be readily available when decrypting. Instead, the key ID is generally what is known for a given block.
Which of the following should we specify?
1) The needkey event contain the Initialization Data, which can be sent to the server just like it can for the Potentially Encrypted Stream Encountered algorithm . This has implementation overhead.
2) The key ID of the current block. This is easier to implement but inconsistent with the Potentially Encrypted Stream Encountered algorithm and may not be useful for obtaining a key. This option is probably better if we have a separate event name and/or fire it at different objects.
[These footnotes apply to all three updates through this one.]
^ Is this true for CENC, even in use cases that involve key rotation?
(In reply to comment #0)
> For ISO BMFF/CENC^ and
> WebM, the needed keys can always be identified based on headers,
For CENC 2012, there's no specified coherence between PSSH boxes and tenc/senc KIDs. It's certainly expected that the PSSH boxes contain the information necessary to obtain the keys, but there is no guidance or guarantee offered anywhere to this effect.
(The 2011 drafts of CENC extend 14496-12:2008, so this problem was much less severe. There's still quite a bit of straw-man mischief one could get up to, but there weren't that many legitimate reasons to do something twisted. 14496-12:2012 allows sample groups in fragments, which blows the barn doors off.)
> 2) The key ID of the current block. This is easier to implement but
> inconsistent with the Potentially Encrypted Stream Encountered algorithm
> and may not be useful for obtaining a key. This option is probably
> better if we have a separate event name and/or fire it at different objects.
With my author hat on, I favor this. It solves at least one problem we have already had to hack around (keeping content permissions and PSSH atoms in sync, with code to defend against drift present at every level) by allowing the client or server to synthesize missing boxes on demand.
I also favor it from a spec point of view. Because of CENC's choice in making PSSHs completely devoid of specified spatial/temporal correspondence with KIDs, there really are two separate kinds of data at the format level. IMO, either we should marshal these two and essentially amend the CENC spec via the format-specific guidelines to require such a correspondence, or we should be honest about the underlying discord and inform the client explicitly.
There are two subcases:
a) The player encounters some new information in the stream that indicates that a previously unseen KeyId is needed
b) The player encounters some media encrypted using a key it does not have, but for which it already has initData
For (b), I think the CDM should just send a keymessage.
For (a), we need to understand how we want to handle this 'new information', which we could call new initData. I see two options:
(a)(i) assume the CDM just handles it internally, putting it into case (b)
(a)(ii) require it to be sent up to application, like the original initData
This 'subsequent initData' differs from the initial initData because the keysystem has already been selected and it's possibly more embedded in the stream (rather than being in some kind of initialization segment). So (a)(i) should be possible and makes things rather simple.
On the other hand, for initial initData we have the possibility for the application to process or even construct this. Do we want that possibility for subsequent initData as well ?
If yes, then the next question is whether this should be dealt with inside the existing MediaKeySession or whether another one should be constructed, or whether this should be up to the CDM.
(a)(ii)(1) [same session] We need to fire a needkey-like event on the same session and have a new method on the session to add initData
(a)(ii)(2) [different session] We fire a needkey event and the app creates a new session
(a)(ii)(3) [CDM decides] Both of the above are supported
I think I have largely just enumerated the options in the comments above. My preference for (a) is to support (a)(i) and (a)(ii)(2).
(In reply to comment #4)
> There are two subcases:
> a) The player encounters some new information in the stream that indicates
> that a previously unseen KeyId is needed
> b) The player encounters some media encrypted using a key it does not have,
> but for which it already has initData
I think (b) is difficult to determine. For example, nothing guarantees that all key IDs are specified in the PSSH. I also think it is an (admittedly minor) implementation burden to have to check whether you have seen a PSSH (or equivalent structure) for a key ID. Theoretically, CDMs may not keep the PSSH around or even know how to parse all of it.
> For (b), I think the CDM should just send a keymessage.
What type of keymessage? Would it be key system-specific.
> For (a), we need to understand how we want to handle this 'new information',
> which we could call new initData. I see two options:
Is the 'new information' format a key ID or the same Initialization Data (initData) format used elsewhere? I think Initialization Data should always be the same format for a given container. Thus, if it's a key ID, we should call it something else.
> (a)(i) assume the CDM just handles it internally, putting it into case (b)
What does it mean to be put into case (b)?
> (a)(ii) require it to be sent up to application, like the original initData
> This 'subsequent initData' differs from the initial initData because the
> keysystem has already been selected and it's possibly more embedded in the
> stream (rather than being in some kind of initialization segment). So (a)(i)
> should be possible and makes things rather simple.
> On the other hand, for initial initData we have the possibility for the
> application to process or even construct this. Do we want that possibility
> for subsequent initData as well ?
If this data is defined as a key ID, then an application can construct it, though I'm not sure what the use case is since createSession() takes a different type of data. I think it's more likely that it would be sent to the server to get a new key for the ID.
(In reply to comment #2)
> Step 7 of says to fire a needkey event where "initData = block
> initData". "block initData" was set in step 4, which says, "If the block (or
> its parent entity) has Initialization Data, let block initData be that
> initialization data."
> The problem is that Initialization Data may not be readily available when
> decrypting. Instead, the key ID is generally what is known for a given block.
Bug 20552 has been filed to fix this text.
We should definitely not send different types of data to the same event. That means we need a new event if we are going to send the key ID.
Unless we guarantee that all keys can be determined from the Initialization Data (CENC does not), we can't guarantee that the user agent knows which, if any, MediaKeySession to fire the event at. The event would either need to be fired at the HTMLMediaElement or the MediaKeys object.
Since CreateSession() takes a specific Initialization Data format, any other data format (i.e. key id) provided in the new event could not be used to create a new session. It could only be used to tell the application/server that the key has not been provided. The reply would need to be a new license (for an existing session) containing that key.
Unless there is a good use case, I propose that we go with no event. An event can always be added later if we find that it would be useful (it is easier to add something than remove it), but currently we don't know what the event should include or what it should be fired at.
Assuming this decision, the change in Comment 1 probably also makes sense.
Note also that this is an abnormal condition, not something an application should expect during normal playback. In most instances, the user will experience some type of pause or skip in playback if this occurs.
Notes from the March 12th telecon (http://www.w3.org/2013/03/12-html-media-minutes.html#item05):
Currently, there is a needkey event for hitting initdata
and another needkey if you need a key to decrypt the current frame
. For the second one, there is not much the app can do - it is really an error condition
We assume that if the app gets to this point with no key, the app has received an earlier event telling it that a key is needed
and we assume the app is already working on acquiring it.
We decided to delete the firing of a needkey event on encrypted block encountered with no key and merge in comment 1
I am having a little trouble following the recommendation at this point. Please bear with me.
If the app determines that another key will be needed at some point in the future (for example from information in the manifest) is it allowed to start a second session to kick off the acquisition of that key? Or is it required to wait for the "Encrypted Block Encountered" algorithm to kick in?
If the latter -- that is a problem when key acquisition takes any amount of time.
(In reply to comment #11)
> If the app determines that another key will be needed at some point in the
> future (for example from information in the manifest) is it allowed to start
> a second session to kick off the acquisition of that key? Or is it required
> to wait for the "Encrypted Block Encountered" algorithm to kick in?
Yes, we discussed the necessity for multiple sessions. If you have initdata you should be able to call createSession with it. You definitely don't want to wait for playback to stall before performing the license acquisition.
I updated the text per comment 1 and comment 10 in https://dvcs.w3.org/hg/html-media/rev/9dedfcd2e3a3
* Removed the second needkey event.
* Changed the name of 5.2
* Did not remove the reporting of MEDIA_ERR_ENCRYPTED. This will be addressed in bug 16857.
* Updated the 7.1 WebM section to specify when to call “First Time a Key Reference is Encountered.”
* 7.2 ISO Base Media File Format needs to be updated in bug 17673.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398454160.51/warc/CC-MAIN-20151124205414-00010-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 13,931
| 104
|
https://curl.se/mail/archive-2002-04/0053.html
|
code
|
--interface option doesn't work?
Date: Fri, 19 Apr 2002 01:17:06 +0200
Trying to use virtual hosts with curl:
Machine's IP is xxx.2.61.66 and has fifteen virtual hosts with "legal" IPs
from eth0:1 to eth0:15. So I try to fetch a page from a web server, like
curl --interface eth0:1 http://some.url
But in apache's logs, there's always "main" machine IP (eth0) logged. Even
tried with --interface SOME_LOCAL_IP ... but nothing.. All virtual hosts are
working normally, otherwise. What's wrong?
Received on 2002-04-19
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363215.8/warc/CC-MAIN-20211205160950-20211205190950-00485.warc.gz
|
CC-MAIN-2021-49
| 515
| 10
|
http://greyhead.co.uk/blog/2006/05/25/time-saving-and-more-time-saving
|
code
|
One thing that really bugged me about the way this site runs is that, whenever you use the "back" button in your browser, the page re-loads. This isn't a standard thing - if you go to, say BBC.co.uk and click a link at the bottom of the homepage, then use the back button to go back to the homepage, you'll still be at the bottom of the page and you won't have to wait for it to reload. On UPSU.net, every time you visit a page it re-loads; this is great for making sure you see the latest version of a page, e.g. the homepage or the forums, but it's terrible for our bandwidth and your page load times.
Because one of the main things I'm aiming for with UPSU.net is to make it as fast as possible to navigate around, I've decided to try turning off the code which tells your browser to reload the page each time you visit it.
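For background, whether your browser re-fetches a page when you press "back" is largely driven by the HTTP caching headers the site sends. Roughly (illustrative values only, not this site's actual configuration), a page that must be reloaded on every visit sends something like:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
while a page the browser may quietly reuse from its cache can send something like:
Cache-Control: private, max-age=600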
Another thing we've added is the "sign in" box in the navigation bar of every page; now, if you need to sign in to the site, you can find a username and password box at the top of the page. Fill them in and click "sign in" and you're automagically signed into the site. At the moment, you're also returned to the page you were just browsing, but I think it will be better to send people to their profile homepage since this is where you can look after your account; I've been trying things out the former way for the past few days, and I'll be deciding later whether to try it using the latter method for a while to see if it's a better way of moving people around the site.
As always, I can't test each page on every type of browser, so your feedback is always welcome - just add your comments below as normal - if this post is greeted with the eternal sound of tumbleweed rolling across the comments box, I'll assume that everything's gone to plan and no-one's suddenly found UPSU.net isn't working for them anymore... ;o)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119838.12/warc/CC-MAIN-20170423031159-00101-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,851
| 4
|
https://mono.github.io/mail-archives/mono-list/2001-August/001150.html
|
code
|
[Mono-list] Quick status update.
Miguel de Icaza
08 Aug 2001 04:13:22 -0400
This is a quick status update on the compiler end of things. Over
the past few days I have implemented the recursive class, interface
and structure definition that C# uses.
Currently the compiler can emit empty classes, structs and
interfaces as specified by the grammar and will catch a handful of
errors (errors in C# are probably the thing that makes the language so
Tonight in about one hour I added support for correctly locating
definitions in namespaces and support for using (the infrastructure
was there, but now it is being used as it was intended to be used).
I also speeded up the compiler a lot by not loading the type
information from the assembly into my own type system, but only
loading types on demand. System.Reflection is sweet. It will catch a
lot of incorrect code for you as well.
As you remember, Dietmar got PInvoke working last week. He has
been doing more work now and has programs that actually print
information on the screen. Dietmar also got this week array support
into the runtime.
Paolo on the other hand added support for structures (not all of
his code is commited to CVS yet) and a few other correctness bits. He
is currently working on the functions and macros for generating code
dynamically. We are going to need this sooner than we expected
because of some opcodes that might need it (this will also give us the
PInvoke speedup that Dietmar wants to code).
Dick is working on extracting the GC system from ORP now and
putting that into Mono
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499891.42/warc/CC-MAIN-20230131222253-20230201012253-00744.warc.gz
|
CC-MAIN-2023-06
| 1,557
| 28
|
https://forcoder.su/practical-powershell-security-compliance-center/
|
code
|
English | 2019 | ISBN: 1734088908 | 366 Pages | True PDF, EPUB | 233 MB
PowerShell is an integral part of Office 365. This book was written with real world scenarios in mind. Authored by a seven year Office 365 MVP, this book will provide you with practical advice on how to use PowerShell to manage the Security and Compliance Center.
With their emphasis on security, Microsoft has placed high value on the Security and Compliance Center for Office 365. While managing Security for your tenant can seem daunting, having a powerful tool like PowerShell can make your life easier. This book will introduce you to tips, tricks and best practices for using PowerShell with the Security and Compliance Center while also introducing you to real world examples and guiding you to building your own scripts.
This book is aimed at those who know some PowerShell or are looking for ways to make their scripts better, and it will also help you become more confident in managing the Security and Compliance Center with PowerShell.
- Basic PowerShell
- Script building theory
- Practical application of scripting
- Real world coding examples
- New features like Exact Data Match and Information Barriers
- Building Labels with PowerShell only!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371656216.67/warc/CC-MAIN-20200406164846-20200406195346-00438.warc.gz
|
CC-MAIN-2020-16
| 1,224
| 10
|
http://pag.aids2010.org/Abstracts.aspx?AID=10812
|
code
|
Expanding capacity for operational research: training of health workers in Peru
F.A. Canchihuaman1, M. Micek2, K. Gimbel-Sherr2, S. Gimbel-Sherr2, J. Zunt2, P.J. Garcia1, E. Gotuzzo1
1Universidad Peruana Cayetano Heredia, Lima, Peru, 2University of Washington, Seattle, United States
Background: A top agenda priority of many international organizations is improving the local capacity in developing countries to conduct Operational Research (OR). Creating a critical mass of professionals capable of conducting and collaborating on OR is challenging, as is teaching the skills required and maximizing the impact of OR training programs.
Methods: We developed a model for training health care workers in OR, with a focus on HIV and tuberculosis, at the Universidad Peruana Cayetano Heredia in Peru. The educational approach included lectures, discussions, group work and field work, and provided comprehensive information about the specific objectives of OR. We invited both health professionals working in programmatic activities and researchers.
Results: The training course (5 days), the first formal course in OR in Peru, allowed participants to pool their expertise in order to identify and prioritize programmatic problems, propose solutions and use suitable OR methodologies. The level of knowledge among participants improved from 2.1 before the course to 4.2 after the course (scale 1-5) (p < 0.001). Skills to design basic OR proposals improved from 2.1 to 4.3 (scale 1-5) (p < 0.001). Most participants (97%) found the course useful for their job needs and reported they would recommend participation to their colleagues. Participants developed short research proposals, most of which were not executed due to an absence of political commitment, resources, time (because of work demands), and ongoing support after finishing the course.
Conclusions: A training approach that combines practical and theoretical sessions and includes both programmatic and research health professionals promotes multidisciplinary teamwork and improves the understanding of OR. Identifying priority areas in HIV/TB programs prior to the course, allocating small funds, and providing continuing support and mentoring after the course may improve the realization of OR projects. In the long term, institutionalizing OR training may help to improve local OR capacity.
Back to the Programme-at-a-Glance
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454702032759.79/warc/CC-MAIN-20160205195352-00331-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 2,383
| 33
|
https://wallpapersite.com/en/knowledge-base/51888/design-considerations-for-a-custom-upc-code
|
code
|
Design Considerations for a custom UPC code
How far can a UPC be integrated into a package design?
I'd like to include a custom UPC code like the following as part of a proposal, but my boss is worried that if it's too "design-y", it will be too hard for a clerk to find, or that it won't scan properly. While I've found many examples, I haven't found much literature on designing custom UPCs. Are there any best practices for this sort of thing?
Here's an example of what a complex transformation might look like:
Your design does not meet the requirements for barcodes as outlined by GS1 US, the organization responsible for publishing and maintaining UPC standards. The standards are available on their website; specific information about best practices is available in the GS1 General Specifications (PDF link). Additionally, some retailers explicitly disallow this type of creativity in their package design requirements (see note at end).
The most immediate issue I see with your design is it is not human-readable. Sometimes barcodes just won't scan due to a number of reasons, so there needs to be a fallback. From section 5.2.3 on page 249 of the General Specifications:
Human Readable Interpretation
The human readable digits shall be printed underneath the main symbol and above the Add-On Symbol.
It also does not meet the height requirements (page 244):
In EAN-13, EAN-8, UPC-A, and UPC-E barcodes, the bars (dark bars) forming the left, centre, and right Guard Bar Patterns shall be extended downward by 5x (e.g., 1.65 millimetres (0.065 inch). This shall also apply to the bars (dark bars) of the first and last symbol characters of the UPC-A barcode.
It's difficult to tell from your image if that's a 3D interpretation of a "twisted box" or if that's the actual barcode you want to use. Either way, another aspect to consider is the symbol dimensions (page 243).
Nominal Dimensions of Characters
Barcodes can be printed at various densities to accommodate a variety of printing and scanning processes. The significant dimensional parameter is X, the ideal width of a single module element. The X-dimension must be constant throughout a given symbol.
If your design does meet all the relevant standards, there might be some room for creativity with barcodes. Some examples can be seen on this article:
As long as the fundamental vertical lines function properly, there are no real limits to what an artist can design around a standard bar code – as these real-and-working scannable images illustrate in stark black, white and monotone color.
One important thing to keep in mind is that some (usually larger) retailers explicitly do not allow for these kinds of barcodes ("animated barcodes", as I've seen it termed). This would be detailed in their vendor barcode requirements document, if they have one. So, be prepared to swap it out for a standard barcode if necessary.
I would advise to not get "creative" with the bar code design. There are standards for using bar codes.
The basic idea of a bar code
Every barcode begins with a special start character and ends with a special stop character. These codes help the reader detect the barcode and figure out whether it is being scanned forward or backward.
How a barcode scanner reads the lines
A barcode essentially is a way to encode information in a visual pattern that a machine can read. The combination of black and white bars (elements) represents different text characters which follows a set algorithm for that barcode type. If you change the sequence of elements you get different text. A barcode scanner reads this pattern of black and white that is then turned into a line of text your computer can understand.
Basically the scanner will read the widths between the black and white areas.
There are best practices to printing bar codes but you do have some wiggle room. The smaller the barcode the harder it will be for the scanner to read. In my experience for QR codes, I would not go smaller than 1".
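As a concrete illustration of that "set algorithm" (a sketch of mine, not from the answer above), the standard UPC-A check digit is computed from the first 11 digits as follows; the sample digits are arbitrary:

# Illustrative sketch of the UPC-A check digit rule (not from the original answer).
def upc_a_check_digit(first_11_digits: str) -> int:
    digits = [int(c) for c in first_11_digits]
    odd_sum = sum(digits[0::2])    # digits in odd positions (1st, 3rd, ...)
    even_sum = sum(digits[1::2])   # digits in even positions (2nd, 4th, ...)
    return (10 - (3 * odd_sum + even_sum) % 10) % 10

print(upc_a_check_digit("03600029145"))  # prints 2, giving the full code 036000291452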
Based on this information, I would not deviate away from using the standard UPC barcode that is the 1D (linear) code.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00568.warc.gz
|
CC-MAIN-2022-27
| 4,102
| 24
|
https://www.yujisato.com/projects-8
|
code
|
Here are some of the projects I've worked on in the past!
I primarily use Reaper, Cubase, and Wwise to create and implement my audio assets. I also have experience working in both Unity and Unreal.
Lyraflo: Music Theory in VR https://projects.etc.cmu.edu/lyraflo/
Composer, Sound Designer
Lyraflo aims to explore how the unique properties of VR could be used to convey music theory concepts to musically naïve audiences.
On this 5-person team, I worked primarily as a sound designer, communicating closely with the artist and programmers to create interactions that integrate both visual and audio feedback to teach music theory.
A Quick Overview:
Being constrained to certain music theory concepts was a big design challenge. For example, one of our prototypes explored how to convey the concept of major and minor using a mix of visuals and audio transformations. I was tasked with composing a series of musical pieces that could switch between major and minor based on player input.
This is an early version of the major minor prototype. I composed the music that can shift between various tonalities.
Composing Musical Pieces that Sound Good in Both Tonalities
In order for players to clearly hear the difference in tonalities, the various music had to be composed almost solely in either major or minor. When composing, I first had to map out the chord progressions and plan out the music so that no matter when the player changed the tonality, the music would still sound good. This meant I could only use chords that shared the same root in both major and minor keys, such as the first, fourth, or fifth degrees. The melodies also had to incorporate a lot of thirds in order to make major and minor clearly discernable.
Making the Music More Tolerable
As you can imagine, music written with just these chords and constraints could become a bit monotonous. My first instinct was to write polyphonic music to make things more interesting, yet doing so confused playtesters. As such, I opted to compose music that would be mainly monophonic, and instead played around with timbre and pitch to create more sonically interesting compositions. After making these adjustments, playtesters had much more success hearing the difference between major and minor, with 20 out of 22 testers successfully identifying the tonality of various music.
Drawing Players' Attention to Audial Cues
Another challenge we faced was that many players did not focus enough on the audio. In order to remedy this, here are two examples of how I used sound design to highlight and draw attention to audio changes first and foremost.
Creating Feedback Sequences
One such example is the use of time, and offsetting when the visual and audio feedback would happen. For example, when players triggered an interaction, we would often play the visual and audio feedback in sequence, instead of simultaneously. Having too much feedback simultaneously would force players to subconsciously choose one stimulus to focus on, which in most cases was visual stimuli. Playing around with feedback sequences created interactions that provided space for players to solely focus on audio feedback at given times, which better conveyed musical concepts.
Creating Soundscapes with Intent
Initially, Lyraflo's soundscape was always quite full, with music, ambience, and sound effects in every scene. After playtesting however, it became clear that the soundscapes needed more design focus. For example, when players are being introduced to a concept such as major or minor for the first time, they only need to hear music showcasing these tonalities. Any other sound would simply be a distraction. On the other hand, if the player is being taught how different tonalities can create different moods and atmosphere, having ambiences and sound effects became crucial. Thinking about what kind of information the player needs at a given moment, and creating the proper soundscape to support that need was crucial to Lyraflo's success. Having a few right sounds at the right moments was much more effective at communicating information than having a panoply of sound assets.
CloudWorks (Sound Committee), 2020
Composer, Sound Designer
I volunteered to sound design and compose for CloudWorks, a project that recreated the ETC Festival in a virtual space in order to accommodate for the remote environment the Fall 2020 semester took place in. Since the project team lacked sound designers, they asked for some volunteers to help set up a Sound Committee to sound design for the experience.
I worked on the Sound Committee with two other sound designers, Noah Kankanala and Tianyi Cao, to compose BGMs and create sound effects for the experience. I helped coordinate between the committee and the main project team in order to ensure that our sound design matched with the creative vision for the festival, and also took on the scheduling and task management for the large amount of work that needed to be done for this undertaking.
Working for Cloudworks was quite interesting in that I had to experiment with different styles of music to match the variety of environments and areas that existed in the world.
See Through Me, 2020
Composer, Sound Designer, Story Writer
See Through Me is a 2D point and click adventure developed by a team of 5 in 3 weeks. I helped write the story and create storyboards, then composed the soundtrack and sound designed for the experience.
The story is about a youthful romance, and I tried to capture the innocence and beauty of that by shifting between A major and minor to create a bittersweet atmosphere.
Here is the link to the game:
Gnomes in a Robe, 2020
Composer, Sound Designer, Level Designer
Gnomes in a Robe is a 3-player co-op game for AirConsole developed by a team of 5 in 3 weeks. I was the sound designer for this project, and also dabbled in a bit of level design as well.
I tried to capture the magical, yet whimsical and silly nature of this game by using instruments such as the accordion and bassoon to create a fun atmosphere. I also tried to shift between major and minor often to introduce a magical feel to my music.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473347.0/warc/CC-MAIN-20240220211055-20240221001055-00410.warc.gz
|
CC-MAIN-2024-10
| 6,132
| 33
|
http://blog.inspired.no/utf-8-with-asp-71/
|
code
|
I am currently working with internationalization of 24SevenOffice.com. We want to support languages like Chinese, Hungarian and even Right-To-Left languages like Arabic. To do so we must use Unicode. On the web the most common Unicode encoding is UTF-8. For a good introduction to Unicode, UTF-8 and other character sets read ‘The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)’ by Joel Spolsky. And also check out the Unicode site and I18n Guy.
In ASP and HTML there are a couple of things we must do to serve up UTF-8:
Response.ContentType = "text/html"
Response.AddHeader "Content-Type", "text/html;charset=UTF-8"
Response.CodePage = 65001
Response.CharSet = "UTF-8"
and the following HTML META tag:
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8" />
We are using Microsoft SQL Server as a database. While it stores data as UCS-2 (Unicode) in Unicode fields (nchar/nvarchar/ntext), I have encountered problems with saving data in Chinese. It seems I have to use the N prefix in front of all Unicode string literals - i.e. UPDATE Table SET Field = N’Unicode Value’;. I am currently looking into this issue further. I really hope I don’t need to do this; if so, I must say I am very disappointed with SQL Server - with Oracle this would not have been a problem.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00575.warc.gz
|
CC-MAIN-2021-39
| 1,340
| 6
|
http://english.ircfast.com/lv/software/download-page/kl13038.htm
|
code
|
Freeware: This program is free software
DirectX SDK includes a set of tools aimed at people who develop video games. If you want to design and build such applications, you can rely on DirectX. With this library of drivers you can improve sound quality and graphics display on Windows.
Thanks to the new DirectX technologies, the DirectX SDK can bring games and graphics applications to life, and they are of great interest.
Drivers are crucial to develop ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999709.4/warc/CC-MAIN-20190624191239-20190624213239-00082.warc.gz
|
CC-MAIN-2019-26
| 463
| 4
|
http://www.iglob.net/sd-times-news-digest-parasoft-releases-soatest-2021-2-windows-package-manager-1-1-python-3-10-floridanewstimes-com/
|
code
|
Parasoft has announced the 2021.2 release of Parasoft SOAtest, Virtualization, CTP, and DTP. The Parasoft Continuous Quality Platform API testing component will be available on October 12, 2021 to help teams overcome software quality challenges and achieve rapid, continuous delivery.
With this release, security testing moves into the developer workflow, and API testing strategies extend from development to testing to AppSec. In addition, the testing platform combines Parasoft SOAtests’ dynamic application security testing and smart generation with OWASP ZAP to identify API security vulnerabilities.
According to Kevin Greene, Director of Security Solutions at Parasoft, “API security testing increases visibility in many enterprise applications as organizations strive to protect users and secure software.”
Windows Package Manager 1.1
Microsoft has announced the release of Windows Package Manager 1.1, eliminating bugs and adding some long-awaited new features. Windows Package Manager is released as an automatic update to Windows 10 and Windows 11 users via the Microsoft Store.
Among the highlighted features are client updates. The client ships with two sources, the Windows Package Manager app repository and the Microsoft Store, and users can access the apps in the store.
Windows Package Manager is distributed directly through the App Installer from the Microsoft Store. You can also download and install it from .. For more information ..
Python 3.10 is now available
Recently, the Python team released Python 3.10, the next major release of the Python programming language. This release provides users with many new features and optimizations, including parameter specification variables, precise line numbers for debugging, and structural pattern matching.
In addition, this release deprecates and prepares for the removal of the wstr member of PyUnicodeObject, allows writing union types as X | Y, adds optional length checking to zip(), and allows parenthesized context managers. It also deprecates the distutils module and adds explicit type aliases.
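As a short illustrative sketch (not from the article) of a few of these 3.10 additions, with placeholder names and values:

# Illustrative only; names and values are placeholders.
def describe(value: int | str) -> str:        # PEP 604: union types written as X | Y
    match value:                              # PEP 634: structural pattern matching
        case int() as n if n > 0:
            return f"positive integer {n}"
        case int():
            return "non-positive integer"
        case str() as s:
            return f"string of length {len(s)}"
        case _:
            return "unsupported"

pairs = list(zip([1, 2, 3], ["a", "b", "c"], strict=True))  # PEP 618: length-checked zip
print(describe(5), describe("hi"), pairs)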
For more information on the release, please visit: ..
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363598.57/warc/CC-MAIN-20211208205849-20211208235849-00589.warc.gz
|
CC-MAIN-2021-49
| 2,367
| 14
|
http://ho-logos.blogspot.com/2009/04/observations-on-early-quran-manuscripts.html
|
code
|
I have recently digitised Gerd R. Puin's 1996 article Observations on Early Qur'an Manuscripts at San'a. The paper was originally published in English in The Qur'an as Text (edited by Stefan Wild).
If you leave your email address below I will forward a PDF copy of the article. Alternatively, in picture form:
[The order is currently incorrect, I will fix this shortly]
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249504746.91/warc/CC-MAIN-20190223142639-20190223164639-00240.warc.gz
|
CC-MAIN-2019-09
| 369
| 3
|
http://bigdata.sys-con.com/node/3141341
|
code
|
August 7, 2014 06:23 PM EDT
By Bob Gourley
Editor’s note: X15 is new and exciting. We just finished a call with their leadership team. This firm is possibly very disruptive, in a positive way. The leadership views machine data in a new way, resulting in a modern capability for this emerging challenge. For a hint of why, see below. – bg
X15 Software Launches X15 Enterprise
Hadoop-based machine and log data management solution offers dramatic improvements in scalability, manageability and total cost of ownership.
SAN MATEO, Calif., August 5, 2014 — X15 Software, Inc., a leading large-scale machine and log data management company, today announced the general availability of X15 EnterpriseTM, a revolutionary machine and log data management solution.
X15 Enterprise helps companies accelerate their machine data management deployments by providing an end-to-end solution for ingesting, indexing, searching and analyzing machine data. Purpose-built for petabyte-size machine data environments, X15 Enterprise enables IT organizations across all industries to solve their most demanding machine data problems.
Machine data is a valuable and fast-growing category of Big Data. Derived from system logs and other sources, this type of data is used to monitor, troubleshoot and optimize business infrastructures and operations. Machine and log data management are critical components of application performance management, security and compliance (SIEM), web analytics, Internet of Things (IoT) and many other enterprise initiatives. Though machine data analysis offers significant benefits, companies have been constrained by the first generation of machine data tools because they did not adequately address enterprise computing requirements.
“Machine and log data analysis is an integral part of enterprise data management. Unfortunately companies with multi-terabyte machine data requirements have been poorly served by inflexible, hard- to-scale and expensive tools currently on the market,” said Val Rayzman, founder and CEO, X15 Software. “With X15 Enterprise, our customers can now leverage the power of Hadoop to maximize the ROI of their operational intelligence initiatives.”
“The importance of machine data in business and IT efforts should not be underestimated. This data has historically been difficult to analyze and our research in big data analytics finds that only 42 percent of organizations are doing so today” said Mark Smith, CEO and Chief Research Officer, Ventana Research. “X15 Enterprise’s Hadoop-based architecture, scalability and openness enables organizations to easily access their machine data with a wide variety of analytics tools.”
Unique Product Designed for Petabyte-Sized Machine Data Environments
X15 Enterprise represents a dramatic leap forward in machine data technology. It is built from the ground up to take advantage of modern data processing technologies and is integrated with Hadoop offerings from Cloudera, Hortonworks, MapR and Pivotal. X15 Enterprise offers an impressive array of features and capabilities, including:
Modern, open and extensible architecture. X15 Enterprise’s modern and open architecture enables companies to avoid proprietary lock-in. Unlike other tools, X15 Enterprise allows machine data to remain in HDFS and does not require it to be duplicated into proprietary storage before it can be indexed and analyzed. X15 Enterprise queries are based on standard SQL rather than a proprietary language. In addition, X15 Enterprise interoperates with enterprise technologies from Informatica, Tableau and other JDBC/ODBC-compliant products. It is also fully embeddable via REST-based APIs.
Elastic scalability. X15 Enterprise is the world’s first Massively Parallel Processing (MPP) machine data platform. MPP gives X15 Enterprise the extreme scalability and performance advantage, perfect for easily analyzing petabytes of machine data. X15 Enterprise scalability is self-managing; partitioning is automatic, and the system rebalances itself online to take advantage of hardware configuration changes.
Automatic fault tolerance. X15 Enterprise has built-in, automatic, fault tolerance with no single point of failure to ensure the highest possible protection against data loss and high availability in case of hardware failures.
Real-time data indexing and querying. X15 Enterprise reads and indexes dynamic and static machine data regardless of its format or size. The data is indexed in real-time with a linear growth in performance as the cluster size increases, and is available for search and analysis as soon as it is ingested.
Search and analytic query environments in one platform. First-generation machine data tools focus on search and do not effectively address the need to perform analytic queries that aggregate and join large volumes of data. This insufficiency results in data “silos” that are not integrated with a company’s existing analytic environment. X15 Enterprise is the first machine data platform in the industry with the power and scalability to consolidate real-time search and complex BI on petabytes of data. With X15, companies can have “one version of the truth” and eliminate unneeded data redundancy.
Low cost of ownership. X15 Enterprise architecture, performance, usability and pricing model result in a dramatically lower total cost of ownership.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00355-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 15,481
| 59
|
https://success.clarizen.com/hc/en-us/community/posts/203972008-What-program-can-MAC-users-use-that-is-similar-to-MS-Project-
|
code
|
What platform are you using right now? Can you export to Excel? You can import projects into Clarizen from Excel using a tool called the Data Loader: http://www.clarizen.com/appsmarketplace/item/Data-Loader.html
What program can MAC users use that is similar to MS Project?
Is there a program that can be used on a Mac that is comparable to MS Project, that would allow us to export from our existing PM software into this MS-Project-like program, which would then allow us to import into Clarizen? We need to import existing projects/tasks/templates/etc.
Please sign in to leave a comment.
The company uses a program called ViewPath. My goal is to get everything (or as much as I can - existing projects, templates, etc.) out of ViewPath and into Clarizen. In theory I would be able to export from ViewPath into MS Project and then import from MS Project into Clarizen... unfortunately I'm working with a Mac. So, I'm looking for another solution, one that would allow me to get the data out of the existing PM software and into the new one.
Okay, so I see 2 options:
1.) Export from ViewPath to Excel, and use the Data Loader to upload the projects into Clarizen.
2.) Save the ViewPath projects as .MSP files and email those files to someone on a PC who can save them as XML files that Clarizen can read using Microsoft Project (there is a 60-day free trial of MSP for those that do not already have it installed).
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303884.44/warc/CC-MAIN-20220122194730-20220122224730-00358.warc.gz
|
CC-MAIN-2022-05
| 1,417
| 8
|
http://forums.pinstack.com/f53/blackberry_desktop_software_5_0_a-112450/
|
code
|
Yep, been out for about a month now
I was sitting here by the PC playing a game of Mobsters 2 and what did I see... an alert popup asking if I wanted to download an update to the BlackBerry Desktop Software. So of course I accepted, and I checked it out and it's the new 5.0! Looks like it's finally official!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720356.2/warc/CC-MAIN-20161020183840-00554-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 309
| 2
|
http://old.clubelo.com/Articles/AdaptivePoissonparametersandresulthistogram.html
|
code
|
To calculate the probability of football results, I have so far used the Poisson distribution for the number of home and away goals in a match. Depending on the difference in Elo points between two teams, the average number of goals scored and conceded were taken as parameters for two independent Poisson variables. Check this article for details.
However, the average values that I approximated through a curve were set once and have not changed since, and I used present data to calculate past rankings. Moreover, using only the Poisson distribution does not reflect the nature of a game too well. In particular, the likelihood of a draw was never more than 27%, which underestimated the real percentages that can go up to 30%, 31%. I have been thinking for months about how I could improve this without making the model too complicated.
Now I am coming up with an improved method that I am really happy with. It uses the strength of clubelo which is its comparatively large database. Two major changes are implemented. The Poisson variables are now adaptive and the set of past results will be used to predict results.
The core of the rating and prediction system is still Elo. For every difference in Elo points, there are two parameters: Average home goals and average away goals. Initially, I will use a distribution where the average number of away goals is 1.6 (changed on 14/10/2013, before it was 2.0) divided by the average number of home goals. The second constraint is that the result prediction from the combined Poisson distributions has to be equal to the prediction from the Elo system. This is done for every percentile and serves as a starting point.
The outcome of every match will influence the Poisson parameters in a way that the new parameters for average goals consist of 99.9% of the old parameter and 0.1% of the new result. This way, the parameters will smoothly approach their real values and will also change over time. Below you can see the current values:
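Written out (my notation, not the article's), the per-match update to a bucket's two Poisson parameters is
\lambda_{\mathrm{home}} \leftarrow 0.999\,\lambda_{\mathrm{home}} + 0.001\,g_{\mathrm{home}}, \qquad \lambda_{\mathrm{away}} \leftarrow 0.999\,\lambda_{\mathrm{away}} + 0.001\,g_{\mathrm{away}}
where g_home and g_away are the goals actually scored in the match just played.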
As we have seen, the Poisson distribution is not sufficient to predict football results accurately enough. It seems that clubs do settle unnaturally often for a draw. It is very hard to find a simple predictive model for this behaviour. I decided that the way forward is not to try to simulate what happens but to see what happened in the past. For each percentile there are hundreds, sometimes thousands of matches in the database. I assume that calculating the distribution of these results is the best way of predicting what will happen in the future. It is done in the following way:
We start with an empty 2-dimensional result table for each percentile. When a result occurs, every other value in the table is multiplied by 0.999 and then 0.001 is added to the cell that corresponds to that result. If there are many games for a percentile, the sum of this distribution will approach one, and recent games are weighted more heavily than old games. For every percentile, there is a difference between the sum of result occurrences and 1 - sometimes more and sometimes less. This remainder is filled up with the predictions from the Poisson distributions, which should minimise statistical noise if there are not enough games. As the nature of 2-leg games is different and not comparable to league or group matches, the results history method will not be applied to second-leg matches.
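A minimal Python sketch (my reconstruction, not the site's actual code) of one Elo-difference bucket combining the adaptive Poisson parameters with the decayed result-history table; only the 0.999/0.001 weights come from the text, while the goal cap and seed parameters below are illustrative assumptions:

# Sketch of one percentile bucket: adaptive Poisson parameters plus result-history table.
from math import exp, factorial

MAX_GOALS = 9    # illustrative cap on goals per side
DECAY = 0.999    # weight kept by old observations
STEP = 0.001     # weight given to the newest result

def poisson(k: int, lam: float) -> float:
    return exp(-lam) * lam ** k / factorial(k)

class PercentileModel:
    def __init__(self, lam_home: float, lam_away: float):
        self.lam_home = lam_home
        self.lam_away = lam_away
        # empty result table: table[h][a] is the weighted frequency of the result h:a
        self.table = [[0.0] * (MAX_GOALS + 1) for _ in range(MAX_GOALS + 1)]

    def observe(self, home_goals: int, away_goals: int) -> None:
        # adaptive Poisson parameters: 99.9% old value, 0.1% new result
        self.lam_home = DECAY * self.lam_home + STEP * home_goals
        self.lam_away = DECAY * self.lam_away + STEP * away_goals
        # result-history table: decay everything, then credit the observed scoreline
        for h in range(MAX_GOALS + 1):
            for a in range(MAX_GOALS + 1):
                self.table[h][a] *= DECAY
        self.table[min(home_goals, MAX_GOALS)][min(away_goals, MAX_GOALS)] += STEP

    def predict(self, home_goals: int, away_goals: int) -> float:
        # the history covers `filled` of the probability mass; the rest is filled
        # with the independent-Poisson prediction, as described in the text
        filled = sum(sum(row) for row in self.table)
        hist = self.table[home_goals][away_goals]
        pois = poisson(home_goals, self.lam_home) * poisson(away_goals, self.lam_away)
        return hist + (1.0 - filled) * pois

model = PercentileModel(lam_home=1.6, lam_away=1.0)
model.observe(2, 1)
model.observe(1, 1)
print(round(model.predict(1, 1), 4))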
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484772.43/warc/CC-MAIN-20190218074121-20190218100121-00028.warc.gz
|
CC-MAIN-2019-09
| 3,376
| 7
|
https://www.allinterview.com/company/525/csc/interview-questions/133/struts.html
|
code
|
what is the space filled with in between the cell membrane and the cell wall when the cell is plasmolysed ??
Give the procedures in starting centrifugal pumps.
what is ststic with example
how do you plan to grow within an organization?
When we should use 'jobid' for commit table (Output table component?? How to use in abinitio graph?
what are the softwares helping for auditing poblems?
Distinguish value andPrice
what is the max. area can we apply the plate load test ?
How to pass workscontract sale invoice which includes WCT & VAT.
How to get best php developer Experience in Php with Sugar CRM / VTiger.
What are the different properties of an web object
What are the designing parameters used in WMLScripts?
a) Identify the following declarations. Ex. int i (integer variable) float a[l0](array of 10 real nos) int (*f())() void *f int (*f()) void *f int f char *(*f) () int (*f) float(*f) float **f int ******f
Explain about virus in RFID?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202671.79/warc/CC-MAIN-20190322135230-20190322161230-00400.warc.gz
|
CC-MAIN-2019-13
| 954
| 14
|
https://www.thefloow.com/latest/apple-wwdc-part-two/
|
code
|
After a very long journey and interesting keynote speech from Apple CEO, Tim Cook, as detailed in the first instalment of my blog series from Apple’s WWDC, it was time to find out more about the updates announced the day before and further explore everything the conference has to offer.
Day 2 – Tuesday 4th June
After a 7am wake up, I checked a few emails and then made my way to the conference centre for day two. It is too far to walk from where I’m staying but rather than take a cab, I opted for one of the many electric scooters you can find around town which are a great low cost way of getting around.
Once you get to the conference, there are so many things on offer. Each hour there are four different sessions to choose from and lots of labs to attend where you have the opportunity to ask questions directly of Apple developers. As a result, there are some big choices to make about where you spend your time but luckily the WWDC app is there to help you do just that!
Here are a few of the sessions that I attended on day two of the conference:
What’s New in Xcode 11
This talk went into more detail about what was shared in the State of the Union address and considered areas including improved workflows, which feature better window splitting, a mini-map of your code (I remember this in Visual Studio around 15 years ago), and improved refactoring. The Swift Package Manager, introduced last year, now takes resources from many sources, such as GitHub, which is a welcome integration.
The new Dark Mode is now supported within Xcode providing us with the opportunity to see how our apps look in Dark Mode to ensure our app design is ready for these significant changes and features. I’ll be recommending that our design team catch up on this talk when I get back.
What’s New in Swift
Swift is one of the four programming languages we currently use and version 5.1 includes significant changes for us such as improved bridging to Objective C, improvements to the language making it easier to work with domain specific languages such as HTML and SQL and most importantly, faster startup times and reduced code size. This should allow our apps to open quicker for the user and reduce the chances of shutdown by the OS, providing a better experience for Apple users of our apps.
Exploring the Conference
WWDC provides plenty of opportunities to learn from Apple developers; I visited their labs to see if there was an easier way for our clients to package the apps we build when they want to sign them themselves. There is also a wide array of interesting people to chat to milling around the conference, such as a guy I spoke to over breakfast who builds an app for wine growers in the Napa Valley.
When not listening to some of the interesting talks on offer, it’s great to look around some of the other sessions and attractions. One on machine learning inspired some thoughts about things we could do in the future, whilst there was also the opportunity to have fun and look ridiculous by playing an augmented reality ball game where you have to knock down an opponent’s skittles.
Day 3 – Wednesday 5th June
Apple have included a range of sporting activities in the programme for WWDC and as today is Global Running day, they staged a 5km run led by a Boston Marathon winner. Normally I’d be taking part, but I’ve been nursing a knee injury so an extra hour in bed seemed like the most sensible option!
Now that I know my way from the hotel to the conference centre, the scooter ride was much shorter giving me the opportunity to wander around the WWDC shop and have a look at the conference merchandise. While I was in the queue, there was a guy taking photos of his friends, he asked them to say ‘cheese grater’ to get a smile (in reference to the newly unveiled Mac Pro)… it was a reference that certainly made me smile too!
More on Core Location
This turned out to be the most relevant talk so far; the speaker went into great detail about changes to user authorisation of location data collection. The decision to allow an app to collect location data will no longer be made upfront, making it clearer for the user, but The Floow, and providers like us, will need to implement location authorisation much more carefully going forward. There will also be the introduction of a ‘just once’ authorisation which again could affect our apps significantly if a user chooses this option.
Other Interesting Talks & Features
There’s a new API to manage background tasks which could be useful for users to balance faster journey sync and battery saving by slowing journey upload and sync. The privacy updates cover a number of areas including Apple sign-in, permissions for Bluetooth tags and a new API called CryptoKit for encrypting data.
There was also a talk on low-level optimisations introduced into the compiler which was very technical and brought back nostalgic thoughts of my first job in the games industry but I learnt some new ways to reduce our app size which should result in a better performance for users.
The final session, although not relevant to The Floow, was a panel discussion on the Health app. It was interesting to learn how the app morphed from pure sports app to general health app and how team members with very different skills, such as engineers and physicians, collaborated to create this.
Keep a lookout for the final instalment of my blog from this year’s Apple WWDC which will be available later this week. In the meantime, don’t forget to check out part one!
Share this article
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00809.warc.gz
|
CC-MAIN-2024-10
| 5,571
| 24
|
https://dcx.sap.com/1100/en/dbreference_en11/create-remote-message-type-statement.html
|
code
|
Use this statement to identify a message-link and return address for outgoing messages from a database.
CREATE REMOTE MESSAGE TYPE message-system ADDRESS address
message-system: FILE | FTP | SMTP
message-system One of the supported message systems.
address The address for the specified message system.
The Message Agent sends outgoing messages from a database using one of the supported message links. Return messages for users employing the specified link are sent to the specified address as long as the remote database is created by the extraction utility. The Message Agent starts links only if it has remote users for those links.
The address is the publisher's address under the specified message system. If it is an email system, the address string must be a valid email address. If it is a file-sharing system, the address string is a subdirectory of the directory set in the SQLREMOTE environment variable, or of the current directory if that is not set. You can override this setting on the GRANT CONSOLIDATE statement at the remote database.
The Initialization utility creates message types automatically, without an address. Unlike other CREATE statements, the CREATE REMOTE MESSAGE TYPE statement does not give an error if the type exists; instead it alters the type.
Must have DBA authority.
SQL/2003 Vendor extension.
When remote databases are extracted using the extraction utility, the following statement sets all recipients of file message-system messages to send messages back to the company subdirectory.
The statement also instructs dbremote to look in the company subdirectory for incoming messages.
CREATE REMOTE MESSAGE TYPE file ADDRESS 'company';
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00099.warc.gz
|
CC-MAIN-2023-14
| 1,921
| 15
|
https://magnocentrojoyero.com/blog/anavar-proviron-clen-cycle-68am
|
code
|
anavar and test stack
anavar dosage reddit
anavar clen winstrol cycle female
It's rather a fantastic in addition to helpful item of details
anavar gyno reduce
anavar before and after pictures
price of anavar in australia
where to buy anavar legally
Its what we call an accident and people have to live with killing someone as it is
winstrol anavar cycle
nolvadex nakuur anavar
It is herbal, natural and organic
anavar proviron clen cycle
Using a 4-hybrid, Piersimoni aced the 176-yard 16th while playing with John Bartkovsky, Dannielle Jenkins and Kurt Blackledge.
anavar 10mg online
test e anavar cycle dosage
anavar reddit wiki
Thanks for the beautiful posting Drue.
test eq anavar results
meditech anavar price in delhi
anavar winstrol tren test
meditech anavar 10mg price
African cowrie-shell beads in an effort to attract the attention of Judge Theodore McMillian, an African-American.
anavar female before and after reddit
Of course some f the more extreme ones explode n episodes of stress, but the others re there all the time.
testosterone cypionate and anavar cycle results
injectable anavar review
anavar hair loss side effects
anavar only cycle results pictures
The application can now be extended to other types of trials and interfaced with electronic health record applications
hi tech pharmaceuticals anavar supplement review
stanozolol and anavar cutting cycle
anavar dosage for dogs
I want to encourage you continue your great job, have a nice holiday weekend
test anavar and winstrol cycle
o en el buzn de tareas. To put it accurately oahu is the towel of preference for virtually any that are
anavar or masteron for cutting
third is that for Prozac generic fluoxetine 40 mg mexico that lexapro vs Prozac this christina ricci
how much does anavar cycle cost
Podran ascender hasta é que vienen en.
anavar vs winstrol forum
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00422.warc.gz
|
CC-MAIN-2021-21
| 1,841
| 41
|
https://forum.devolutions.net/topic30886-apply-message-prompt-to-entire-folder.aspx
|
code
|
If I have a folder and I want every session in it to have a message prompt, I don't have the option for it.
If I edit a single session, no problem: I have the "Events" tab and I can edit the message prompt before connect.
But I can't do that for an entire folder the same way I can for VPN; for events there is no folder-level option.
Events are only manageable directly on entries. We do not have the option by folder. I am transferring this topic to our Feature Request section.
Although our various support queues will be monitored for emergencies, Devolutions' offices will be closed on September 2nd 2019.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330913.72/warc/CC-MAIN-20190826000512-20190826022512-00414.warc.gz
|
CC-MAIN-2019-35
| 583
| 5
|
https://exrna.org/resources/data/
|
code
|
The exRNA Atlas is developed and maintained by the Data Management and Resource Repository (DMRR). It includes exRNA profiles derived from various biofluids and conditions and currently stores data profiled from small RNA sequencing assays. The datasets are uniformly processed using the exceRpt small RNA-seq pipeline. Faceted filtering and data navigation tools are enabled by rich metadata standards developed by the consortium and metadata annotations contributed by the data producers. Uniform data quality metrics agreed by the consortium are applied to all datasets. The Atlas will be updated regularly with new profiles. Using the consortium login option gives ERC consortium members access to datasets which have not yet been publicly released.
View this video tutorial to explore the various features available in the exRNA Atlas.
The wiki at the ERC Consortium's Data Coordination Center includes information about the DCC data submission pipeline.
The DMRR has developed several use cases based on consortium members' datasets and publications. The purpose of these use cases is to highlight the software tools developed by the DMRR, with the goal that all future datasets generated by the consortium can be analyzed reproducibly and compared to each other.
2019 Update: See Murillo et al. for an integrative analysis of data in exRNA Atlas version 4P1. The use cases below use older versions of exceRpt and other software. They are good introductory tutorials, but please use them with caution. If you need help with a use case not covered by what you see here, please contact us at info@exRNA.org.
Murillo OD et al. "exRNA Atlas analysis reveals distinct extracellular RNA cargo types and their carriers present across human biofluids" Cell (2019) 177: 463-477.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474669.36/warc/CC-MAIN-20240226225941-20240227015941-00696.warc.gz
|
CC-MAIN-2024-10
| 1,774
| 6
|
https://intouchtechnology.zendesk.com/hc/en-us/articles/4401755008788-Game-Play-Report
|
code
|
About this report
The Game Play - Lead Activity Report ranks staff member’s activities against other staff. Staff members earn points based on the value of the sales activities they complete.
Ideas for use: Use a typically slow sales month to create a friendly employee competition. First person to 100 points gets a gift card; the person with the highest number of points at the end of the month gets a vacation day, etc.
To access this report
- Log into Drive and select the Reports menu
- Select the Live Reports section
- Find (or search for) the report named Game Play (or Game Play - Lead Activity)
- Select View
- A new window should appear allowing you to enter your report parameters
- Select a Club (this legacy report is available in single-club format only)
- Select a month (this legacy report is available in one-month segments only)
- Select from a PDF or CSV format
- Select Run Report
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00131.warc.gz
|
CC-MAIN-2022-05
| 888
| 13
|
https://www.nanostuffs.com/blog/?p=5071
|
code
|
Analysts expect the number of connected devices to reach 6.5 billion by the start of 2017. This makes PHP and IoT go hand in hand. Thanks to Icicle, one can write asynchronous code using synchronous coding techniques in PHP, which means PHP code is able to run several tasks within the same script. Asynchronous programming methods provide better data exchange between connected gadgets. Some hardware platforms such as Arduino already support PHP, and you can control an Arduino board with your PHP-based script. There is also the possibility to build a PHP application that uses GPS data gathered from an IoT device. For example, the GPS sensor on your Android phone can send its location to the Bluemix cloud, and the PHP application can publish this data on your website.
PHP for the Internet of Things
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648858.14/warc/CC-MAIN-20230602204755-20230602234755-00566.warc.gz
|
CC-MAIN-2023-23
| 819
| 2
|
https://www.my.freelancer.com/job-search/create-business-ledger-database-microsoft-access-2007/
|
code
|
Development of programs in the C# programming language, for example: attendance programs, cashier programs, etc.
I want to build an applica... Accounts should be dynamic without needing to change the program code. What I need is: 1. A database design for Financial Accounting (MySQL based) 2. Queries to produce the desired reports: - Journal report - General Ledger report - Trial Balance report - Balance Sheet report - Profit & Loss report
I want to create articles that are useful for the public and will later share them on social media, with edited photos as well.
Create Artist profile in Microsoft PPT with mentioning all her achievements, work profile, her art & craft work samples etc. so that same could be shown to different higher officials as a Portfolio.
I need some graphic design.
I need Java Developer can work on Microsoft Kaizala to develop Action cards
Xamarin Native (Not Cross Platform) - A simple screen needed (check attached) - Should be able to connect to SQL server, show message connected, failed - After connection show records from a table "select * from user" (2 fields)
edit microsoft world outline table and page number issue each title must be assign to the right page number
...Operators, Agents, Customers, Fixed Deposit, Recurring Deposits, Daily Deposit, Withdrawals, Loans, Savings accounts and other different schemes maturity calculation and Advisor Business and commission calculation highly fleet-footed and reliable. It helps in keeping an eye on various aspects of Society and make the most accurate supervision and calculation
I need quick help in converting a SIMULINK model into a C++ desktop application. The model is very simple and it's basically for an autonomous mob...model is very simple and it's basically for an autonomous mobile robot. I know it can be easily done through Simulink Coder but I prefer if someone could write smart codes in Microsoft Visual Studio 2017.
I have three "Saved Imports" I want VBA Code to upload each ...code for each. I am assuming you don't need the excel sheets and you can use the Saved Imports to do this task. I know this is not a big task for someone that knows VBA and Access so please don't waste our time with a high bid. This is a 15-20 min project if you know what you're doing.
...to setup and use digital ledger technology. You can model your business network easily then test it on the IBM playground. The default setup & tutorials do not include how to get data from other devices into the ledger. Use the default Hyperledger tutorial setup - easy to follow. I can set this up and provide remote access - or the dev can do that
...we will want to initially place them on the blockchain or hyper-ledger depending on the use case discussion we have. Functions: - CRUD operations on adding fruits and vegetables to the blockchain or ledger - Ability to search and explore the chain and contents of app. - Users to create their own marketplaces for buying and selling items. - Transact
...progress by not voting One double-spend attempt at the ledger level Full source code documentation Unit and property testing Blockchain and ledger Haskell data types to represent the block chain Chain validity check implementation (chain hashing) Data types for transactions (as block contents) Ledger validity check implementation (signatures and balances)
I need the following dollar amounts added up together. T...Hartwig $999 2-15-19 Shawn May $1,999 2-25-19 Bruce McKeon $999 2-27-19 Walter Wilcox $999 Harold Walter $999 Nancy Rettger $1499 Peter Beauregard $999 Frank Vross $999 Mike Ledger $1999 Barbara Hinkle $999 Daniel Arvizu $1,999 Patrick Occoner $400 Wayne Schindler $1,999 George Pender $1,000
I need to know if somebody that can help me create a store from scrap in Visual Studio Microsoft and can pull database from SQL visual studio. For more information on it this project and if your will to help me build this working website from the bottom up let's chat more about the details.
...quote them integration SAGE 300 with PHP application. The PHP application help with Procurement. Therefore we would like to integrate at Purchase order, supplier invoices and ledger accounts. Can you assist with this? While you might need to see the PHP app, can you give me a rough estimate of how much you would charge and how long it takes to do the integration
...Spain, these are the positions. Fluent spoken English is required. SAP HANA Master Data Governance all 4 objects (customer, Vendor, Product and Material) FICO - General Ledger (Display) Customer Care Transport (Logistics) Indirect Procurement - SRM Requisitioning including Opentext Contract Maintenance Plant Maintenance including Spare Parts
...to their corresponding cost category columns. We have 12 statements that need this formula added that will post the amount of the debit to its correct column based on the "Ledger for Reference" column cost category that has already been added for each transaction. it just needs to be copied to its matching column for that cost category. We have included
...computer running Terminal Services. You must use a volume licence edition of office." We do not have access to download the volume licence edition or have access to licence servers. Is there a way of setting this up on the new server. We have access to the old server if needed. If you apply for this, we need immediate assistance. Please send us a
1. Python backend 2. Web-based frontend 3. Integration with Microsoft's Speech Software Development Kit (SDK) 4. POC is streaming two minutes of text and then displaying the output as it comes in within the browser If the architecture can be simplified compared to what has been provided in the image that is fine. Looking to spend no more than $150.
POS CUSTOMER LEDGER SUPPLIER LEDGER BARCODE PROMOTION STOCK MANAGEMENT MULTIPLE UNIT OF MEASURE RETURNS AND REPORTS FOR ALL TRANSACTIONS SECURITY PER MODULE: VIEW,INSERT,EDIT,DELETE,DATE. SCREEN DISPLAY BARCODE GENERATION BARCODE LABEL PRINTING. Sample web Application available to use as guide.
Hello, We would like to switch our existing software from Quick Books Enterprise to Microsoft Dynamics. Please share your experience with past work. Thanks
Hi Freelancers, I need help with a Microsoft excel project, specifically I need some data analysis done, which involves calculating total time spent in a hospital and then conducting multiple regression on various variables such as (doctors, screening time, insurance type, day of the week etc.). Will share more details when I find the right person
...Engine • HTML/CSS Access • Facebook Store • Abandoned Cart • Advanced Reporting • Multi-Lingual & Multi-Currency • Multi-Store • Affiliate Management • Android Application • iOS Application • Multi-Vendor Store Mutli – Vendor Feature : • Seller Registration and Approval Flow • Seller Commissions • Seller Product A...
We need a person that is bilingual Japanese/English to deliver a Microsoft Outlook class (content will be provided) via Webinar. The trainer will deliver the class in Japanese. The class is scheduled for Monday 3.18.19 at 8pm.
Report on banking crisis during 2007 - 2008 in Europe
Currently, have youth registration using Microsoft Forms. I want to be able to take the Parents name and email and add them to our Microsoft Exchange External Contacts section and add them to the exchange distribution group No-reply2. We are using Microsoft 365 for all solutions
We have a timesheet program running off of Access 2003 files. Ever since updating to Access 2016 we have been having issues with the program. We need the current issues resolved, and we need the program updated to work with Access 2016.
We need to get done a professional Microsoft Access template for sales people. This project need to have: 1. Employees (with all the details) 2. Products (with product name + product category + price) We need to track their sales, to see how many products they ordered, how much they need to pay for (each sales person will have a comission from the
...order book management and order book history accounts management client importance and product importance register invoice formatting with the tools of invoicing all ledger Excel or pdf output offerS Cash and Bank management invoicing direct from the order purchase from the other shop on our platform, Inventory automatic or manual Multiple
Infinite Software Solutions, Inc has an Event and Association Management that we want to integrate with the Microsoft 365 and Dynamics 365 products. We are looking for a 6 month 40 hour per week commitment for this project. The following is required to work on this project. Full Stack Developers Skills and Expertise Required: • Language - Fluent
I have a server which works on Microsoft Server 2012 R2 I need to set it up to work online
I have an access file with many data fields and id like to turn some od these tables into interactive forms to make adding and updating the data easier. I need this done within the next 4 hours, and can discuss stage 2 following completion of stage 1. Request: 1. Build a Form that allows me to filter by 3 criteria (performance year, ticker / company
Expert needed to get MSSQL running on a Windows Server. It doesn't seem to be running even though it has been installed. This is to only get the software running - actually setting up the databases will be done by a third party import software.
We use an older Microsoft CRM 2011 environment. A previous developer created some custom plug-in assemblies which I would like to review and hopefully disable if no longer needed. Problem is we have no one currently on staff that has any experience doing this sort of thing. Looking for someone with some experience in this area to assist me. Short
I have a program that was created originally under access 97 that is not compatible with the newest version of access. I would like to hire a developer to make the appropriate corrections so that it becomes functioning. The program uses time value of money calculations to amortize financed debt based on certain sequential formulas. It also calculates
very straight forward Produce and Excel Export Data files from a MS Access 2007 / MS SQL server 2005-08 Database to pre defined Excel format and put the option on the menu
I just need a developer who has experience with the SharePoint RESTful API, who can get an accessToken with my SharePoint account and call an API that creates a folder in my SharePoint [log in to view URL]
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204768.52/warc/CC-MAIN-20190326014605-20190326040605-00307.warc.gz
|
CC-MAIN-2019-13
| 10,528
| 39
|
https://apache.googlesource.com/freemarker/+/f97784a750b1ecda6f81a18d11ff76dd3243ec2d
|
code
|
author: ddekany <firstname.lastname@example.org>, Wed Jul 31 13:19:59 2013 +0200
committer: ddekany <email@example.com>, Wed Jul 31 13:19:59 2013 +0200
- Allow the debugger client to attach arbitrary data to the Breakpoint, that it will get back with the suspension event. This could be the source java.io.File for example.
- DebugBreak doesn't extract the template name from the Breakpoint anymore, but uses the actual template name (they should be the same anyway). This is how it was earlier too.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00065.warc.gz
|
CC-MAIN-2021-43
| 502
| 3
|
http://foulweather.blogspot.com/2014/01/great-ocean-quarterly.html
|
code
|
Cymru To Cascadia Via Dilmun
18 January 2014
Great Ocean Quarterly
New Magazine out of Australia that I have a small story in.
Looks like a breath of fresh air. Long live print.
I'll post a review when I get my copy.
Check it out here.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591150.71/warc/CC-MAIN-20180719164439-20180719184439-00636.warc.gz
|
CC-MAIN-2018-30
| 235
| 7
|
https://www.wattpad.com/user/wally_lawless
|
code
|
I'm a web developer here at Wattpad but I've loved reading since I was a kid. I've always had a soft spot for the Star Wars expanded universe and read pretty much anything geeky.
Hopefully you're loving Wattpad, I know we are certainly working hard to make it an awesome experience for both readers and writers. If there's something that can be better I'd love to chat with you about it so send me a message.
- Georgetown, Ontario
- Joined March 26, 2013
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257920.67/warc/CC-MAIN-20190525064654-20190525090654-00113.warc.gz
|
CC-MAIN-2019-22
| 453
| 4
|
http://www.schooltechnology.org/blog/2011/07/06/skipping-ms-office-2003-2010
|
code
|
My district announced a few days before the end of school that they would be updating us from Microsoft Office 2003 to 2010 (skipping Office 2007) during the summer, so when we get back in August we will all be running Office 2010. I thought this was really cool, but then the emails started to flood in from teachers panicking over the announcement. People were worried about the upgrade and having to learn how to use the new Office platform.
So I was happy when I was reading Atomic Learning's blog today and found the post about how to transition from MS Office 2003 to 2010, with a great video that shows you exactly how to do it. I have forwarded this post to all my teachers so that when they get back, they will have this handy little video that will show them everything they need to know. Thank you Atomic Learning.
- Brad Flickinger, Bethke Elementary School
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530176.6/warc/CC-MAIN-20190421040427-20190421062427-00281.warc.gz
|
CC-MAIN-2019-18
| 867
| 3
|
https://windowsforum.com/threads/gadgets-issue.13694/
|
code
|
I am running Vista Business on a Dell Latitude D620. The other day when I booted up, I noticed my gadgets were all weird. They were either not loading completely, or some were simply little white boxes. I have seen the issue resolved elsewhere on the net by fixing .dll files and whatnot, and I have tried several of these fixes, but to no avail. Then today, I started up AIM6, and I found that the box where I would normally view a conversation is completely blank no matter what. I cannot see a history of the conversation; what I write, or what the other person does. I feel like these things are related because AIM was working fine before the gadgets issue. Any thoughts?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323604.1/warc/CC-MAIN-20170628101910-20170628121910-00303.warc.gz
|
CC-MAIN-2017-26
| 676
| 1
|
https://stevehackman.libsyn.com/2021/08
|
code
|
Steve Barker joined us back in Episode 41 to talk about Re-Thinking Church and now comes to my rescue to help spare me the boredom of a mandated 14-day isolation in a Hong Kong hotel room. Along the way we discuss the COVID situation, my confirmation in the Anglican church, and making the Christian faith practical in the world around us.
Go Beyond the Pale with Steve Barker!
1:23 - Intro with Steve & Tammy
8:37 - Conversation with Steve Barker begins
9:00 - The effects of quarantine isolation
15:15 - When do we just learn to live with Covid?
22:00 - Traveling Internationally during Covid is expensive
29:32 - Joining the Anglican church
38:19 - Musing on the need for "church"
48:26 - Practical applications of Christianity vs. "Finding God's Will for my Life"
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511284.37/warc/CC-MAIN-20231003224357-20231004014357-00461.warc.gz
|
CC-MAIN-2023-40
| 771
| 10
|
http://drake.diei.unipg.it/software/opjviewer_jpeg_2000_viewer
|
code
|
OPJViewer JPEG 2000 viewer
- Opens single JPEG 2000 codestreams and file formats
- Opens Motion JPEG 2000 sequences (no real-time playing, though)
- Parses J2K, JP2, and MJ2 structures.
- Shows MXF structure in the log panel.
You can ask for more features or report any discovered bug in this section of our forum.
OPJViewer for Windows.
OPJViewer for Linux.
Encoder settings window.
Decoder settings window.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00078.warc.gz
|
CC-MAIN-2023-50
| 408
| 10
|
http://youtubemusic.ws/power-metal-angkara-official-video-music-brazilian-reaction-cedrix-reaksi/
|
code
|
POWER METAL – ANGKARA -OFFICIAL VÍDEO MUSIC – BRAZILIAN REACTION CEDRIX REAKSI
#powermetal #powermetalreaction #indonesia #angkara
Original video:
Disclaimer: Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research.
Fair use is a use permitted by copyright statute that might otherwise be infringing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103329963.19/warc/CC-MAIN-20220627073417-20220627103417-00135.warc.gz
|
CC-MAIN-2022-27
| 425
| 6
|
http://forums.cnet.com/7723-21579_102-44955/laptop-decisions-dell-compaq-hp-or-dell/
|
code
|
Laptop Decisions -- Dell, Compaq/HP, or . . . Dell?
by Ryan T - 11/9/04 8:12 PM
Hi there, I'm in the market for a laptop in the next two months, and I'm having something of a dilemma. There are too many to choose from.
Right now I'm using a Dell Inspiron 4000 P3-800. It's okay . . . but it just barely runs Visual Studio.NET 2003 and now I have 2005 so I'm antsy for a new mobile system.
I'm torn between a couple of different systems.
Centrino (It encompasses 3 of my 4 needs)
So, it seems that all these systems fulfill most of my needs and wants.
What I'm concerned over is, while I'm head-over-heels in love with the 700m, it's so beautiful, it has so many features, and is just darn cool, but I'm concerned about the keyboard and screen size. Am I really going to be squinting that badly at it? The screen is ~12" diagonal WXGA.
So I consider the Compaq X1000 . . . that seems like a nice notebook. I had a Compaq notebook not that long ago, and it was good, but I know nothing of real-life experiences with their new laptops. Does anyone have horror stories?
Same goes for the HP. All these computers have roughly similar pricing and specs, and all are highly rated by Cnet, but I'm looking for some more opinions.
Because then there's the 600m. It's the safe bet. Centrino, CDRW, battery life, b/g wireless, but it doesn't have that sexy wide screen.
So if Dell is better and the small screen of the 700m would really kill my eyes . . . then I'll just go with the 600m . . . what do you all think?
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932182.89/warc/CC-MAIN-20150521113212-00084-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 1,504
| 12
|
https://analysis-situs.medium.com/build-a-swept-volume-for-a-turned-part-860f91827693
|
code
|
Build a swept volume for a turned part
Lathe machines are commonly used for manufacturing because of their relatively low cost and inherent simplicity of the process. At the same time, lathes are often combined with milling that serves as a post-process for finishing up a turned workpiece with non-rotational features.
There are several questions you might want to ask looking at a turned part at hand:
- How much of it can be manufactured on a lathe machine? Well, to be cost-efficient, the more the better.
- What is the envelope swept by this turned piece when rotating? That is something you might need for collision checks, e.g. in milling simulation.
This article aims at describing the computational principles one might want to follow to build up a swept volume of a rotary part (that’s a special case of a swept volume problem). You only need to know the axis of rotation, which might be the lathe axis or a spindle axis for a milling tool. The purpose of the algorithm is to build the minimal solid of revolution containing the original part. This solid is sometimes named a Maximum Turnable State (MTS) of the geometry [Yip-Hoi, D., & Dutta, D. (1997). Finding the maximum turnable state for Mill/Turn parts. CAD Computer Aided Design, 29(12), 879–894]. From a lathing standpoint, MTS represents an intermediate state of the workpiece geometry from which no more material can be removed by turning. The result of a Boolean subtraction between a stock shape and the MTS gives what’s called the Maximum Turnable Volume (MTV).
Simple & stupid
The MTS is a solid of revolution and that kind of a solid is obtained by sweeping a generatrix curve around an axis. Therefore, if we want to build MTS, it's sufficient to extract the generatrix curve. Although there are some attempts to generate MTS based on the recognized turned features of a model, I wouldn't go for such methods unless I really have to. The reason for that is the ultimate complexity of the approach we end up with. If five years spent on feature recognition taught me something, it's that feature recognition is never perfect enough to rely upon. Whenever I can do the job in a simple and stupid way, I would prefer doing so and not overcomplicate the thing.
One simple approach was reported by M. Watkins et al [Watkins, M., Rahmani, K., & D’Souza, R. (2008). Finding the maximal turning state of an arbitrary mesh. 2007 Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference]. They sample the rotation axis (presumably OX) with the even distribution of slices and then find the intersection points between the slice planes and mesh edges. Since the slices are all sorted, the extreme values of the intersection points are sorted as well. It’s trivial then to construct the profile curve. Well, this really sounds simple and stupid, so let’s give it a try.
Mesh data structure
One should carefully choose the mesh data structure. The good choice will pay off, while an unsuitable data structure will make you constantly working things around instead of focusing on the problem. And you know what? There seems to be no such data structure that would serve all the cases equally well. While worked at OCC, I was somehow blindly assured that everything should be based on Poly_Triangulation, which is the simplistic mesh representation for facets distributed by B-rep faces. Then, OCC came up with another data structure (Poly_CoherentTriangulation) that offers a more editable and iteratable mesh, although it’s not a part of the OpenCascade data model anymore. Something could also be grabbed from Salome if you manage to find where it’s all hosted.
It took time and it required a certain shift in my mindset to realize that the data structures should always be chosen application-wise. There’s no such a thing as a “good” data structure. There could only be a data structure good enough for a specific problem. That’s actually a trivial foolish discovery if you think about it, but the implications are severe. In practice, it means that instead of componentizing yet another “now perfect” data structure (look at OpenMesh as an example), it might be better to copy & paste your stuff for exclusive use in your target algorithm. For example, a decimation algorithm might require a half-edge data structure. Working with dirty scans might demand a somewhat less restrictive data structure that perhaps allows for non-manifold triangles. Who said that copying & pasting code is a bad thing? I call bullshit on that.
Then, there’s also a question of relationships between your mesh data structure and the CAD geometry. Are they divorced? Do you have a reference mechanism to attribute a single facet as belonging to a certain CAD face? And there’re many questions like that. Long story short, for the MTS algorithm, I use a variation of poly_Mesh hand-made data structure. It’s not ideal by any means, but there’s one killing feature of this in-house data structure: I can do whatever I want with it and disregard any product boundaries because there are no boundaries to respect. I simply don’t care about the architecture of any 3-rd party product. No product no pain. Free love.
The algorithm assumes that the axis of revolution is aligned with the global OX axis. Such an assumption does not take away the generality of the approach as you can always reorient your part to satisfy that requirement. All in all, the implementation of M. Watkins’ algorithm is pretty straightforward as all you need is to find the intersection points of all mesh edges with the slicing planes. I’m not sure it’s written in the original paper, but you also have to check that the intersection points fall inside the corresponding edges to avoid false-positive hits outside the edge’s domain. Protection against infinite points is another thing to handle. And, finally, there is a typo in the formula for computing the axial distance (that would be correct if the main axis was OZ, which is, by the way, more natural for turned parts).
One obvious drawback of M. Watkins’ algorithm is that it indifferently skips feature edges. From the image below, one could easily see that the probe points are rarely coincident with the model edges. The initial distribution of slices by bins leaves little chance to capture sharp corners. That is the price of simplicity, and the trivial way to improve the result is using many more bins (e.g., I ended up with 100).
The value stored in each bin is the radius value we are looking for. There are as many points as many slicing planes (bins) you asked the algorithm to generate.
The algorithm is insensitive to the quality of the input meshes. I have had hard times trying to refine meshes to make them more suitable for other algorithms, so I could definitely appreciate that. No need to think much about the aspect ratios of your triangles whatsoever.
The process of slicing is trivial as it all comes down to intersecting mesh edges and planes. You do not even need any planes as the problem is essentially reduced to one dimension.
Having all intersection points, it’s trivial to select ones that yield the max distance at each axial bin. After connecting those points with the straight line segments using something like BRepBuilderAPI_MakePolygon, you end up with the generatrix profile.
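To make the slice-and-bin step concrete, here is a small self-contained sketch (an illustration only, not the author's code), assuming the mesh comes as a vertex array plus edge index pairs and the rotation axis is the global OX:
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>
struct Vec3 { double x, y, z; };
// One radius value per bin: the largest distance to the OX axis among all
// intersection points of mesh edges with the slice plane x = const.
std::vector<double> MaxRadiusPerSlice(const std::vector<Vec3>&               verts,
                                      const std::vector<std::pair<int,int>>& edges,
                                      const double xMin, const double xMax,
                                      const int numBins)
{
  std::vector<double> radii(numBins, 0.0);
  const double step = (xMax - xMin) / (numBins - 1);
  for ( const auto& e : edges )
  {
    const Vec3& a = verts[e.first];
    const Vec3& b = verts[e.second];
    if ( std::fabs(b.x - a.x) < 1.e-12 )
      continue; // the edge lies in a slice plane: no transversal intersection
    const double lo = std::min(a.x, b.x), hi = std::max(a.x, b.x);
    // Only the slice planes falling into the edge's x-range can hit it.
    const int first = std::max(0,           (int) std::ceil ( (lo - xMin) / step ));
    const int last  = std::min(numBins - 1, (int) std::floor( (hi - xMin) / step ));
    for ( int i = first; i <= last; ++i )
    {
      const double x = xMin + i*step;
      const double t = (x - a.x) / (b.x - a.x);          // edge parameter of the hit point
      const double y = a.y + t*(b.y - a.y);
      const double z = a.z + t*(b.z - a.z);
      radii[i] = std::max( radii[i], std::hypot(y, z) ); // distance to the OX axis
    }
  }
  return radii;
}
Connecting the points (xMin + i*step, radii[i]) in order then gives the generatrix polyline mentioned above.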
It’s a no-brainer then to call BRepPrimAPI_MakeRevol to build up the swept solid.
The algorithm is not ideal as it cannot stack up several slices in one bin, and that makes it incapable of capturing planar turn faces. Still, the approximation is quite good, and one might want to add a small proximity offset to the profile for safe collision tests (if that's what you're looking for).
To conclude, this approach is easy to implement, it’s fast and reliable, and it tolerates bad meshes as the input.
We would normally want to compute some global properties of a swept body, e.g. its volume. The easiest way would be constructing the explicit B-rep geometry of the corresponding solid of revolution and running something like BRepGProp on it. At the same time, unless we really need to have the explicit boundaries of the swept body (e.g. for collision tests), there’s no need to dive into geometric computations anymore. Given that we can represent our profile as a scalar function, it’s not difficult to integrate it along the OX axis.
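In symbols, with r(x) denoting the profile radius at axial position x, the quantity being computed is the standard solid-of-revolution volume
V = \pi \int_{x_{min}}^{x_{max}} r(x)^{2} \, dx,
which is exactly what the quadrature code further below evaluates once the profile is wrapped into a function object.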
One easy way to turn our profile into a function that could undergo integration is by representing it as a 1-degree spline curve. For example like this:
Handle(Geom_BSplineCurve)
  PolylineAsSpline(const std::vector<gp_XYZ>& trace,
                   const double               minKnot,
                   const double               maxKnot)
{
  // Poles of the degree-1 spline are simply the polyline vertices.
  TColgp_Array1OfPnt poles( 1, (int) trace.size() );
  for ( size_t k = 0; k < trace.size(); ++k )
    poles( int(k + 1) ) = gp_Pnt( trace[k] );

  const int n = poles.Upper() - 1;
  const int p = 1; // degree 1: straight segments between poles
  const int m = n + p + 1;
  const int k = m + 1 - (p + 1)*2;

  // Evenly distributed knots over [minKnot, maxKnot].
  const double span = (maxKnot - minKnot) / (k + 1);
  TColStd_Array1OfReal knots(1, k + 2);
  knots(1) = minKnot;
  for ( int j = 2; j <= k + 1; ++j )
    knots(j) = knots(j-1) + span;
  knots(k + 2) = maxKnot;

  // End knots get multiplicity 2 (= degree + 1), interior knots 1.
  TColStd_Array1OfInteger mults(1, k + 2);
  mults(1) = 2;
  for ( int j = 2; j <= k + 1; ++j )
    mults(j) = 1;
  mults(k + 2) = 2;

  return new Geom_BSplineCurve(poles, knots, mults, 1);
}
For the integration, one could use something like Gaussian quadratures. I implemented them once and it was working nicely. The only trick is to perform integration by knot spans for a better accuracy:
// Evaluate using Gaussian integration in knot intervals.
const TColStd_Array1OfReal& knots = profileCurve->Knots();
const int nGaussPts = 6;
double precGaussVal = 0;
for ( int k = knots.Lower(); k < knots.Upper(); ++k )
{
  // Integrate over the current knot span [knots(k), knots(k+1)].
  // NOTE: the quadrature helper's argument list was cut off in the original post;
  // the span bounds and point count passed here are assumed.
  const double intervGaussVal = core_Integral::gauss::Compute(&fx,
                                                              knots(k),
                                                              knots(k + 1),
                                                              nGaussPts);
  precGaussVal += intervGaussVal;
}
double SweptVolume = precGaussVal;
Here ‘fx’ is the function object that returns the squared radius of rotation for the given value of ‘x’ multiplied by Pi. As a result, we avoid using any topological primitives and derive what we need from the profile polyline.
The main downside of the outlined MTS algorithm is its incapability of capturing sharp corners of a model. That might be critical for applications where high accuracy is a must. On the other hand, Watkins algorithm is easy to implement and it’s pretty darn fast. Even for complex models, it takes a fraction of a second. Another shiny property of the algorithm is its robustness. Since the logic is straightforward, there’s almost nothing to debug and you can get to the stable version real quick.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645089.3/warc/CC-MAIN-20230530032334-20230530062334-00292.warc.gz
|
CC-MAIN-2023-23
| 10,613
| 57
|
https://www.toysdesk.com/2007/09/infrarecorder-open-fast-small/
|
code
|
If you are a Windows user and you want a simple but powerful burning software, you have two choices:
pirate some big and well-known burning "ROM" 🙂 or download InfraRecorder.
InfraRecorder is open source, really small (the installer is 2.4 MB!), it comes with a clean and familiar user interface, and it can do all the usual stuff like reading and recording ISO and BIN/CUE formats, erasing rewritable media, copying on the fly, etc…
My burning choice under Linux is K3B, but from now on, under Windows I will use this great piece of software.
Thanks to everyone involved into InfraRecorder development.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00416.warc.gz
|
CC-MAIN-2023-50
| 607
| 5
|
https://www.verypdf.com/wordpress/201201/convert-wmf-to-pdf-and-encrypt-pdf-20059.html
|
code
|
Because simple passwords are easy to decrypt, sometimes you may need to encrypt the PDF when you set a password to protect your PDF files. VeryPDF HTML Converter Command Line, a stand-alone application, provides multiple options to encrypt PDF, set a password, set the key length, etc. to help you prevent your PDF files from being printed, copied, and modified by unauthorized users.
VeryPDF HTML Converter Command Line does not require the Adobe environment or any third-party application. It supports batch conversion. So you can easily and quickly convert WMF files to PDF files and encrypt the PDF files in batch via a single command line.
Please download VeryPDF HTML Converter Command Line, and install it on your computer. Then, you can follow the steps below to convert WMF to PDF and encrypt PDF.
Step 1. open the command prompt window
In Windows XP, you can take four little steps to open the command prompt window: click Start, > select Run from the Start menu, > type “cmd” > click OK.
Step 2. Type a command line
Please type a command line in the command prompt window, according to the following basic usage and some related options:
- Usage: htmltools [options] < WMF file > [<PDF file>]
- -ownerpwd <string> : Set 'owner password' to PDF file
- -keylen <int> : Key length (40 or 128 bit)
- -keylen 0: 40 bit RC4 encryption (Acrobat 3 or higher)
- -keylen 1: 128 bit RC4 encryption (Acrobat 5 or higher)
- -keylen 2: 128 bit RC4 encryption (Acrobat 6 or higher)
- -encryption <int> : Restrictions
- -encryption 0: Encrypt the file only
- -encryption 3900: Deny anything
- -encryption 4: Deny printing
- -encryption 8: Deny modification of contents
- -encryption 16: Deny copying of contents
- -encryption 32: No commenting
- ===128 bit encryption only -> ignored if 40 bit encryption is used
- -encryption 256: Deny FillInFormFields
- -encryption 512: Deny ExtractObj
- -encryption 1024: Deny Assemble
- -encryption 2048: Disable high res. printing
- -encryption 4096: Do not encrypt metadata
Example 1. d:\htmltools\htmltools.exe -ownerpwd "skite" -keylen 2 -encryption 16 c:\in.wmf d:\out.pdf
- d:\htmltools\htmltools.exe stands for the executable file.
- -ownerpwd "skite" is the option that can be used to set an owner password.
- -keylen 2 is an option that can be used to set key length as 128 bit RC4 encryption.
- -encryption 16 is the option that can encrypt PDF and preventing the contents from being copied. All the three options must appear when you encrypt PDF.
- c:\in.wmf is the directory of the input file.
- d:\out.pdf is the directory of the output file.
If you want to convert WMF to PDF and encrypt PDF in batch, you can use * to represent all the files in a folder or on a disk.
Example 2. d:\htmltools\htmltools.exe -ownerpwd "skite" -keylen 2 -encryption 16 c:\*.wmf d:\*.pdf
Now please type a command line, adapted to your own computer, and press Enter to start the conversion from WMF to PDF. You can use the free trial version 50 times; if you want to buy VeryPDF HTML Converter Command Line, please click full version.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00152.warc.gz
|
CC-MAIN-2023-40
| 3,031
| 36
|
http://superuser.com/questions/tagged/computer-parts+battery
|
code
|
What's the best UPS unit for a home PC
I recently moved into a new house, and it seems that every time there's an electrical storm, the power will blink on and off quite frequently. I was thinking of getting a UPS (Uninterruptible power ...
Jul 29 '09 at 17:15
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999671521/warc/CC-MAIN-20140305060751-00050-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 2,274
| 53
|
http://www.tomshardware.com/forum/3840-63-general
|
code
|
How can I know if my installed Windows 7 Ultimate x64 did install as x64? I opened Windows Configuration and I see a Windows/System32 but no Windows/System64! The CPU reads x64! At the top of the configuration table I read Windows 7 Ultimate Build 7600 but that is all. Then, what am I missing? What did I do wrong? Anticipated thanks for your cooperation.
Start --> Right click "Computer" --> Properties
System32 is a legacy thing since everything else is based on 32 anyhow (apps etc)
Thank you so much for reply! I indeed saw it there.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106984.52/warc/CC-MAIN-20170820185216-20170820205216-00155.warc.gz
|
CC-MAIN-2017-34
| 538
| 4
|
http://www.coderanch.com/t/530093/XML/Default-TransformerFactory
|
code
|
But now it is complaining about arguments we are passing to the transformer which used to work. What we want to do is just reset this property so that things go back to the way they were before we added the Oracle jar to the classpath (and not need to update any code). Ideas?
In case you ever need this again, I found it with the following program (run this without the Oracle XDK JAR on the classpath)
The defaults are:
I had problems with Oracle's XDK too, so I know how frustrating this can be!
You have to put files in META-INF/services on the classpath that are specifically named like the factory interface names. For example, to override the DocumentBuilderFactory, you need a file called "javax.xml.parsers.DocumentBuilderFactory" and it has to have the text "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl" in it (no spaces or newlines).
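For illustration, the classpath layout then looks roughly like this (the JAR name is a placeholder; the file name and its single content line are exactly as described above):
    my-app.jar
      META-INF/
        services/
          javax.xml.parsers.DocumentBuilderFactory
            -> contains the single line:
               com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl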
Keep in mind this is ONLY for Java 6 - Java 5 and 7 may be different. I kind of wish this were documented somewhere.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447860.26/warc/CC-MAIN-20151124205407-00188-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 985
| 6
|
https://communities.vmware.com/t5/VMware-Workstation-Player/forward-ports-when-using-NAT/td-p/2823154
|
code
|
We need to forward ports to our vm when using NAT. The only way I know on how to do it is by setting a static ip in C:\ProgramData\VMware\vmnetdhcp.conf and then open ports in C:\ProgramData\VMware\vmnetnat.conf.
The problem is that we use a VM zipped up and ready and distribute it in our office, and need to have a solution to have it already fixed in the zip.
Is it possible to set static ip and forward ports by setting up something in the .vmx file?
The NAT networking address space is a matter of the VMware installation itself. What I have done is change the address space of NAT to be the same on all computers requiring copies (or moves) from a set of VMware computers. This can be done in the VMware networking configuration - and probably with those files that you mention.
Unfortunately, VMware Upgrade will change the address space to something random again, usually, and you need to redo the above.
The static IP address is a matter of the VM and its OS. You can set it in Windows (or another OS) using OS tools. It will, of course, remain in the ZIP. When your computer IP is standardized and VMware networking is standardized, you only need to do this once - I expect port forwarding to be the same ... moreover, the setup is the same for every physical computer. When updating the VM computer from a new zip, nothing changes and you don't need to do anything for the networking.
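For illustration only (the MAC address, IP address, and ports below are placeholders, not values from this thread), the two files mentioned in the question might carry entries roughly like this:
    # vmnetdhcp.conf: pin the guest to a fixed NAT address
    host MyGuestVM {
        hardware ethernet 00:0C:29:AA:BB:CC;
        fixed-address 192.168.137.128;
    }
    # vmnetnat.conf: forward host port 8080 to the guest's port 80
    [incomingtcp]
    8080 = 192.168.137.128:80
After editing, the VMware NAT and DHCP services typically have to be restarted for the changes to take effect.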
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362741.28/warc/CC-MAIN-20210301151825-20210301181825-00620.warc.gz
|
CC-MAIN-2021-10
| 1,379
| 6
|
https://tracker.moodle.org/browse/MDL-11160?attachmentViewMode=gallery
|
code
|
Affects Version/s: 1.8, 1.9
Component/s: Database SQL/XMLDB
Environment: The following SQL on the OU's live system returned count = 176574,
SELECT COUNT(*) FROM mdl_role_assignments WHERE timeend > 0 AND timeend < 1189123200;
Affected Branches:MOODLE_18_STABLE, MOODLE_19_STABLE
Fixed Branches:MOODLE_18_STABLE, MOODLE_19_STABLE
Escalated from OU Bug 3791, 'Live cron exhausts memory, before "core" jobs including sync_metacourses'
Cron is consistently failing on
"Removing expired enrolments ...Allowed memory size of 134217728 bytes exhausted..."
The attached patch uses the preferred 'rs_fetch_next_record' call, and only gets required fields, to reduce memory use.
It removes the 'course' table, enrolperiod>0 check and loop, introduced for Bug
MDL-10181 (also MDL-8785) - is this really necessary? If so this SQL join could form the basis.
SELECT ra.roleid, ra.userid, ra.contextid
FROM mdl_course c
INNER JOIN mdl_context cx ON cx.instanceid = c.id
INNER JOIN mdl_role_assignments ra ON ra.contextid = cx.id
WHERE cx.contextlevel = '50'
AND timeend > 0
AND timeend < 1189123200;
--AND c.enrolperiod > 0;
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.87/warc/CC-MAIN-20210510033850-20210510063850-00302.warc.gz
|
CC-MAIN-2021-21
| 1,108
| 20
|
https://lists.debian.org/debian-devel/2010/06/msg00225.html
|
code
|
Re: A lot of pending packages
Petter Reinholdtsen wrote:
> My sponsoring preferences are available from
> <URL: http://people.skolelinux.org/pere/debian-sponsoring.html >. To
> make sure I have direct contact with the prospective package
> maintainer and avoid a backlog of packages I should have sponsored, I
> want to be contacted on IRC about sponsoring. So to me,
> mentors.debian.net is a nice repository to find the source, and
> uploading there is not the last step a future package maintainer need
> to take to get her packages sponsored.
Before I write anything else: I only need to have my Debian accounts
created and I'll be a DD. So, I am kind of seeing things with 2 different
viewpoints at the same time: as a sponsoree and as a future DD.
I got 2 suggestions to make about sponsoring. These are just raw ideas
that I am sending, I'm not sure if they are good, but I just want to share
what's in my mind. Feel free to comment and explain why I'm wrong.
Maybe we could imagine a kind of survey that the sponsor would write,
to tell how the new maintainer performed with his package, just right
after it has been sponsored. That would, of course, be some added sponsor's
work, but it could be kept small.
My 2nd suggestion is coming from the Maemo platform (the OS behind
the Nokia n900 that is Debian based). In Maemo, there is a "devel"
repository that includes apps that aren't necessarily in good shape. The
users know that fact when they are adding the repository which contains
packages that are not necessarily as tested, and won't complain.
I wonder if we could have such a repository in Debian, so that new
maintainers would have their packages sent there. We would have to
discuss what would be the rules to get from devel to SID. What I have
in mind could be checks like:
- the maintainer has been responsive for a period of time
- the packages of the maintainer have been in good shape as well
The issue really being the way the maintainer is reacting to issues,
rather than the issues themselves.
The advantage of this system would be that we wouldn't need so many
checks to have apps going to devel. We could even think about it as a
big bazaar of ongoing work that would not need checks at all (apart
of course, licensing, that would still need strong checks). This would
prevent people from not being happy about sponsorship in SID.
The devel repository could be said as NOT part of Debian, just like
contrib and non-free.
Now, combine the 2 ideas. If a (new) maintainer has X good sponsor
surveys, then his package(s) would go from the devel repository to
SID automatically (after a DD checks for it manually and agree on
the decision), and he would gain the rights to have his packages
go directly to SID when they get sponsored.
Don't get me wrong, the idea is to have LESS checks on the sponsored
packages, rather than too much, so that we would have a faster
sponsoring process (new maintainers will be happy, sponsors too),
while still maintaining intensive quality checks in SID / testing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806979.99/warc/CC-MAIN-20171123214752-20171123234752-00222.warc.gz
|
CC-MAIN-2017-47
| 3,012
| 49
|
https://news.ycombinator.com/item?id=27619977
|
code
|
But that was five years ago. I'm pretty sure Proton supports my Windows builds on Linux better than I was ever able to do with the native executables, and at least there's that.
Then the Linux community gets another game. Does it not work on K.I.S.S Linux running sowm as a window manager and an entire custom userspace? Probably not. Does it work using the latest Ubuntu version? Probably.
But the notion that developers have to support every possible Linux configuration out there just seems toxic to the Linux game development effort as a whole.
As long as you're nice about it and don't turn it into a bland cookie-cutter "corporate" response most people will understand.
As someone who just runs Linux, and occasionally runs some games on it, I'm always a bit annoyed when people report these kind of very specific bugs with old/weird drivers/distros that aren't in the supported platforms list. It turns developers off for completely understandable reasons. It's a shame, because for most people it does usually work.
I run Void Linux, it almost always works, but I'd never report these kind of bugs without testing Ubuntu (or whatever is supported) first.
If I sell on Windows (I do) and Mac (I do) then I have to support a certain range of OS versions and ongoing OS releases - even if that means (for example) I have to figure out how to 'notarise' a Mac executable so that a user doesn't have a big scary Security Warning pop-up. Not ideal, but fine. The challenge with Linux is that I would have to communicate against expectations - that I would have to make it clear that when I 'support Linux' it looks different to the support for Windows or Mac. I do genuinely think that 99% of Linux people get this, it's just the 1% that's maybe less forgiving of different standards.
For me, just personally and selfishly, passing the buck to Proton or Wine is an easier sell for my business.
When your game runs under Steam runtime, the real distribution is (almost) irrelevant - everything in your address space is supplied by the runtime, the things you get from the host system is the kernel/kernel modules and services you talk to via IPC (i.e. X11/Wayland, Pulseaudio).
It solves the problem of what version of what library is installed (if at all, maybe user removed it as "bloat") on the host system. You get known set of binaries that you can test against / coherent SDK target like with Windows or Mac.
Whether this matters is up to the developer. But it's a potential downside.
Gog is quite content with just Ubuntu being supported; they are not that different.
Not sure whether that is the only reason why some games are not on Gog though; often Mac ports are missing too. It seems more like missing rights for the ports than technical reasons.
I don't know; probably not. But this was the response I got back from the devs of "Expeditions: Conquistador" when I asked if they could release the Linux version on GOG (when I bought it originally I still had a Windows machine).
Valve should just partner with Canonical and release Ubuntu support.
Let the other distros figure out how to get it working there.
Most hardware and software vendors primarily target it.
It might not be hip, but it's what everyone knows.
People would start pretending to be Ubuntu to install games.
In the Linux community, at least unless a project has a toxic developer (there are a few), bug reports are Always Good. They're how we make the software WE use better on the systems that WE use it on. Even if a report isn't fully actionable (e.g. it's a problem with graphics drivers), the report is often helpful because the bug tracker is probably public and we can try to find workarounds, or at least flag the issue for others.
For closed source commercial software, especially cases where a tiny number of developers are working on the code, bug reports are Always Bad. They represent more work, work that you don't want to have to do, because at the end of the day these are people who already bought the game. You've gotten as much out of them as you're going to get out of them. If they're more trouble than they're worth (someone else in this thread claimed 90% of bug reports out of 1% of purchases), then it's obvious you should just ignore them or not port your game to their platform at all. You'd think this attitude would be different for issues that affect a lot of people: a good bug report can help you fix widespread problems that are hurting your players, but actually even this is rare. See the story of this guy fixing a bug causing 6 minute startup times that affected at least thousands of people using reverse engineering, when the developer ignored the problem for years: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...
So I think you're right, these are mostly people enthusiastic about a piece of computer software instinctively trying to collaboratively improve it for everyone. But because development is so limited (there's only one person reading bug reports and working on the code), those reports are experienced as frustrating rather than helpful. Worse still, because the software is commercial there may be an unspoken feeling that support is owed for the software because the user paid for it.
This is such a bad attitude. The game is supposed to work correctly without any bugs. People who paid money for the game deserve continued support. It doesn't matter how much time and money the developers have to spend, that's their problem.
If the software is defective, consumers should be entitled to a refund. That ought to motivate companies not to release shoddy work.
This is not real-world software engineering :)
Pretty much any software has bugs; maybe surprisingly to non-programmers, games are especially complex (in primis, architecturally).
In the real world, one can realistically talk about, let's say, an acceptable threshold of bugs.
> People who paid money for the game deserve continued support
And this is not real-world (game) business. Whether one likes it or not, there is a per-unit profit, and the corresponding value in terms of support is very limited.
An ideal solution to this is open sourcing games after a certain time (Id Software used to do it), but this is not realistic. I wish it, though!
> That ought to motivate companies not to release shoddy work
One can't really force a company not to do shoddy work. The gaming market is a radically free one, unlike other constrained markets, like internet providers. Customers are actually entitled to have the money refunded, at least on Steam. Gaming journalism actually has been including bugginess in games evaluation for a while, so buyers can decide in an informed fashion.
Though for an indie game it probably isn't crazy to make the code open sourced and then put those users to work for you. That can really help reduce the burden. But of course opens you up for people stealing your software (which let's also be real, happens anyways).
Most people are very helpful and quite understanding that as a sole indie developer, it would be hard to support all the configurations. But occasionally I get angry emails and negative reviews about game not running on Linux.
Given the sales (Linux is 1% of the total sales, Mac is 3%), I would say for an indie developer, it makes more sense to put Linux support on a low priority. It is unfortunate for Linux gaming community but it is what it is.
Also even though Proton has come a long way and has become relatively stable - occasionally there are some strange issues (like Steam Cloud sync failing, etc.) here and there. But overall the effort is much lower compared to maintaining a separate Linux build.
Right now the flow for the user is 1. See store page 2. Buy 3. Play 4. Hit bug.
This is the moment when they find out that they bought a game that was not in fact supported. That is super frustrating (and possibly legally requires a fix or a refund). If there was a 1.5 step of "This game offers no support for Linux" or "This game offers no support for any distribution except Ubuntu 21.04" then it is much more acceptable, because I accept that detail before purchasing.
https://steamcommunity.com/app/378720/discussions/0/49012573... is their explanation, https://store.steampowered.com/app/378720/Thea_The_Awakening... the store page. Great game btw.
Last I checked the limit was the minimum of "2 hours playtime" and "2 weeks after purchase"
> Steam is quite generous about refunding games, either because you purchased them accidentally or they didn't run correctly or any other reason,
Again, last I read, Steam is quite generous but will probably flag your account if certain patterns emerge (probably through some ML-alchemy).
You can have Linux builds available via Steam without listing Linux support on the store.
Maybe I'm in the minority, but I've never bought a game because it could theoretically run under Wine, while I've bought quite a few games that run on Linux. Very rarely do I buy a game that does not natively run on Linux.
Wine seems like kind of an ugly hack even in the best case scenario. You don't know what performance is going to be like with your hardware, and you usually don't have anyone who's tested the program on your hardware to make sure it runs correctly and doesn't crash, and obviously games are extremely sensitive to these kinds of hardware dependent things. Installing Wine can be gross too, you have to install a bunch of 32 bit libraries on an otherwise clean 64 bit system.
I'd say the appeal of making a native Linux game is that it gets you access to the market of people who will buy a native Linux game. It's true that you already had the market of people who are technically running Linux and playing Windows games via Wine, but presumably there are many people who aren't going to go to that much trouble. (Obviously, there are many cases where supporting Linux isn't financially viable anyway, Wine or native.)
At this point I just buy windows games, and if they don't work under wine, I run them in a VM.
KDE doesn't even work on the latest Ubuntu version - on my hardware, at least.
I have an Ubuntu system with an Intel graphics card, and my system won't boot into KDE unless I remove the old Intel driver that it defaults to, so that it then tries the new Intel driver which actually works. Gnome, Enlightenment, and whatever Ubuntu defaults to, they all work fine regardless, but KDE doesn't.
Note that this is after removing the 'Intel' driver for xorg, which had tons of screen tearing issues, in favor of the apparently "correct" modesetting driver, which is the preferred option for newish Intel cards. Except that at some point I was installing something else which explicitly depended on the Intel driver package...
And then we all ended up working from home and now I don't have to deal with any of that crap. Now I just use xorgxrdp from my Windows machine and it just works.
I can't imagine the gigantic hassle that would be trying to game on Linux in that kind of environment, where the "supported" and "default" options are the wrong choice and you have to manually uninstall things if you want stuff to work properly. No thanks. I love Linux (way, WAY more than Windows), but I'm not going to waste time and energy trying to play games on it.
I do it literally every single day.
The last point of failure is glibc (the GNU C library): a Linux application, even if fully statically linked against all other dependencies or packaged with all of its shared libraries, may fail to run on another Linux distribution if that distribution ships an older glibc than the one the executable was linked against. Therefore, in order to ensure that the application can work on the broadest range of distributions, it is necessary to link the application against the oldest possible version of glibc, for example by building it on the oldest supported LTS (Long Term Support) release of the target distribution.
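For illustration, here is a minimal, hedged C sketch (not from the original comment) of the symbol-versioning trick sometimes used to keep the glibc requirement low even when building on a newer machine. The GLIBC_2.2.5 tag is an x86-64 assumption and should be verified against the target libc:

    /* Sketch: pin memcpy to an old glibc symbol version so the binary also
       loads on distributions whose glibc is older than the build machine's.
       The GLIBC_2.2.5 tag is an assumption for x86-64; check what your binary
       actually requires with: objdump -T ./game | grep GLIBC_ */
    #include <string.h>
    #include <stdio.h>

    /* Bind memcpy against the old version instead of the newer GLIBC_2.14
       symbol that modern toolchains select by default. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "hello", 6);
        puts(dst);
        return 0;
    }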
If one does not have enough resources to support Linux, an alternative approach (assuming OpenGL is used) may be to build the executable for Windows and Wine using the MinGW compiler, either on Windows or by cross-compiling on Linux with dockcross-mingw (https://github.com/dockcross/dockcross). At least with Wine, one will not have to deal with glibc compatibility issues.
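As a rough sketch of that workflow, the same translation unit can be compiled natively and cross-compiled for Windows/Wine; the invocations in the comment are assumptions and the cross-compiler package name varies by distribution:

    /* hello_port.c - hypothetical minimal check that one codebase builds for
       both targets.
       Assumed invocations:
         native Linux:  cc hello_port.c -o hello
         Windows/Wine:  x86_64-w64-mingw32-gcc hello_port.c -o hello.exe */
    #include <stdio.h>

    int main(void) {
    #ifdef _WIN32
        puts("MinGW build: run under Windows or Wine");
    #else
        puts("native Linux build");
    #endif
        return 0;
    }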
Everyone decided this so we got 3 different "standard" formats... (Snap, Flatpak, and AppImage). See: that XKCD about standards.
And windows has .msi, whatever is used to install windows store packages, NSIS, Inno Setup...
it's a way overblown complaint. I use AppImages built on centos 7 for my own stuff and never heard anyone having issues with it.
On Windows it is easier to build self-contained applications because there is no RPATH issue. All one needs to do to build a self-contained application that is easy to ship and works everywhere is to put the shared libraries in the same directory as the executable and create a zip archive or an MSI installer. When applications are installed, they are installed in a single folder, and files are not scattered across the file system (binaries in /usr/bin or /usr/local/bin, libraries in /lib, data in /usr/share, ...) as on Linux and other Unices.
That's like complaining that, say, a program built against cygwin on windows, can't work on a system without the cygwin dll (and all its dependencies also compiled against the cygwin dll).
Flatpak attempts to solve the dependency-hell problem by providing a standard runtime - a set of libraries (glibc, Gtk or Qt) with a fixed version - and requiring the developer to build the application against that runtime, which avoids binary compatibility problems and dependency hell. The trouble with Flatpaks is the integration with the desktop.
AppImage attempts to solve dependency hell by bundling everything into a single launcher executable with a SquashFS file-system image payload. The disadvantage of this approach is that it is only possible to ship a single executable. AppImages are also not free from glibc compatibility issues.
A macOS-like app bundle, and a change in Linux development culture towards building and designing applications as self-contained from inception, would strengthen the Linux desktop and reduce the application-packaging work duplication that still affects Linux distributions.
By using musl it is possible to build fully statically linked applications that work everywhere; however, the dlopen()/dlclose() calls that are used for loading shared libraries and plugins at runtime on Unix-like systems do not work with musl.
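A hedged sketch of what that limitation means in practice: a fully static musl build can still run if any optional dlopen() use is guarded; the plugin file name and entry point below are made up for illustration.

    /* Hypothetical example: guard dlopen() so a fully static musl build
       (where dlopen() typically fails) can fall back to built-in behaviour.
       "./libplugin.so" and "plugin_entry" are made-up names.
       On glibc, link with -ldl; on musl the functions live in libc. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *handle = dlopen("./libplugin.so", RTLD_NOW);
        if (!handle) {
            const char *err = dlerror();
            fprintf(stderr, "plugin unavailable (%s), using built-in path\n",
                    err ? err : "dlopen not supported in this build");
            return 0;
        }
        void (*plugin_entry)(void) = (void (*)(void))dlsym(handle, "plugin_entry");
        if (plugin_entry)
            plugin_entry();
        dlclose(handle);
        return 0;
    }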
The bigger problem is that glibc cannot be statically linked properly. It dlopen's some libraries, like for NSS. Maybe games could avoid triggering those cases but musl is definitely better.
While syscalls themselves are backward-compatible (even for faulty behavior), I think I've read somewhere that some parts of DRI are not, but I've lost the source on that.
Alternatively, if more games were free (as in freedom) software, a large community could better sort out some of these issues.
Edit: Though the game would have to have pretty good replay value, I'm not installing another OS for a 10 hour game.
Pretty much all games I play are through Steam’s compatibility layer anyway, and nowadays it’s a very smooth experience.
On which distro and graphics hardware? ;)
I’m not a game dev so I don’t know how much you still actually notice of the distro / hardware once you’re running in Proton or Wine.
This was very noticeable, even as just a user. Where the pages of games you liked would routinely surface smaller titles that looked really cool under the "more like this" section, it's now the exact same set of games recommended for most of them, and it's really, really annoying.
The indie games I've played all seem to have little news posts about 1 million copies sold. So... what's the return on an indie game of 2012 indie game quality?
The major issue is OpenGL drivers; they can be a pain on Linux (especially proprietary ones like NVIDIA's).
If both monitors have similar configuration (DPI, refresh rate) it is fine (Xrandr fixed many of the issues that X previously had). If the monitors have different configurations it isn't, but this problem will reflect even on the desktop and I am sure that nobody will report a problem that they have on desktop as a "game bug".
> full-screen programs
X doesn't have a concept of full-screen, but it mostly works fine anyway if you grab the root window (of course you shouldn't write code to do this manually; your game engine will probably do it for you automatically).
> low-latency input
X latency is fine for most games, and in many cases you will not use X to handle input anyway. Also, if your game really needs low latency (only a minority of genres do), you can bypass X.
Anyway, I was not referring to specific X or PulseAudio problems, which you will have whether or not you put things in a container. I am just saying that even if steam-runtime doesn't bundle those libraries, it is not much of a problem since those APIs are stable.
I just had to fix a crash because my distro's Love2D uses LuaJIT which only supports Lua 5.1, but the game's source contains a bit that requires Lua 5.4. But it was an easy patch (which unfortunately cannot be upstreamed because upstream doesn't want PRs).
For other games, as long as the game provides an Ubuntu version, it'd work for me. I run an Ubuntu Docker container for Steam and other "first-party software" (binary packages directly from the software manufacturer as opposed to distro repos), because when such software says "it supports Linux" it almost always means "it supports Ubuntu".
With Proton you can publish a game and deny all responsibility for Linux support. “Sorry we don't support Linux but we hear it runs great on Proton!”
I'm also deeply curious as to exactly how many indie game developers are writing code that interfaces directly with these low level systems and graphics APIs. In my experience, building cross platform games (I've shipped from Unity, Unreal, Godot, and XNA/MonoGame) is trivial and the framework handles 100% of the complexity of porting. From the sounds of this comment thread, everybody is writing their game in raw shader language and then having to port that to Vulkan or OpenGL.
Now, supporting Linux via Proton, that's where Proton can kill native Linux builds. But I'm not sure how common that is, or will be.
The usual response here (from any vendor, not just an independent game developer) is to say you only support the latest LTS version of Ubuntu/SteamOS with the officially supported drivers there and that's it. You're absolutely right to do that. If you want obscure distros to be able to run your program, you can open source it and let them deal with the packaging/testing/maintenance. The fact that all the OS packages are open source is the only reason all the random distros are even able to exist, so you're already making it difficult for them when you don't do this. No reason to dance around that.
You could, but clearly this developer didn't. You're responding to someone's real life experience with a hypothetical.
Except DRM. Attach DRM or anti-cheat to your project, software that actively doesn't want to run on anything but a specific OS, and the linux community will turn on you.
Ultimately the best way to have fair games is to promote finding players through avenues other than official matchmaking: friends or even just random people on something like discord.
Completely agree with you. The current online gaming model where people play with untrustworthy strangers is stupid and broken. We should be playing with friends. Instead we get invasive rootkits installed in our computers and they don't even fully prevent cheating.
Disclaimer: I'm not a game dev and don't really know what I'm talking about.
If you say this publicly then angry anime avatars will yell at you on Twitter.
If you're willing to listen, I could describe to you the technical reasons why Xorg was abandoned. But I also doubt the answer will please you, because the reality is that the reason it's perceived as being "stable" is because it's not being improved anymore -- if people were still hacking on Xorg instead of Wayland, then your Xorg would be breaking left and right too.
Yes, that's the point of an insult.
> Nobody wants to maintain legacy software for free in their spare time. That's all it is.
This is 100% false, a lot of people do. In fact a language (Free Pascal) and framework (LCL) i am using have a very good track record of preserving backwards compatibility while at the same time continuously improving. I have code i wrote two decades ago that works fine with it and will automatically get the new features introduced just with a recompile.
The same can't be said for, e.g. Gtk: Gtk2 apps not only wont get any new features from Gtk3, but they wont even compile. Same with Gtk4, because making the mistake twice wasn't enough.
> If you're willing to listen, I could describe to you the technical reasons why Xorg was abandoned.
There are no technical reasons, Xorg is code, code can be modified. It is all political reasons at best and people wanting to rewrite stuff they'd rather not bother learning about. As JWZ writes in his CADT page:
<<Fixing bugs isn't fun; going through the bug list isn't fun; but rewriting everything from scratch is fun (because "this time it will be done right", ha ha) and so that's what happens, over and over again.>>
> the reason it's perceived as being "stable" is because it's not being improved anymore -- if people were still hacking on Xorg instead of Wayland, then your Xorg would be breaking left and right too.
Xorg improved all the time over the years going back to the XFree86 days, adding new features consistently without breaking existing code and applications. If it suddenly started breaking now it wouldn't be because it is impossible to not break but because the developers somehow started breaking it.
That's great that people are doing that with FPC, but if they're continually adding new features and removing deprecated things then that's not legacy software. To illustrate further what I mean, probably none of those people can be convinced to work on other old stuff like GTK1 or GTK2, because you're really comparing apples and oranges here. If it were easy or profitable to do that in GTK, somebody would have done it already. Half the reason things changed is because the entire underlying stack changed along with the hardware -- this is not even remotely comparable to something like a self contained compiler for a programming language.
If you disagree, I'd love to hear your proposal on how to keep all the various API changes working in the same codebase without causing it to become overly complex and burdensome, and this is probably over the span of at least 20 system libraries that have all deprecated and/or removed various things over the last 30 years. From my view, part of the problem here is that there were some legitimately bad decisions made back then, that looked reasonable at the time but turned out to be not so great, and nobody really wants to keep paying for those decisions. This is not similar to something like a Pascal implementation where they could just aim for compatibility with an existing compiler from the 1970s and then build on that, these were entirely new APIs at the time and they didn't have their designs fully fleshed out, and in some ways they still don't, because the problem space is still somewhat open-ended.
You're wrong that there are no technical reasons, I assure you the technical reasons are real. Again, I can tell you if you're willing to listen, but if you're going to blanket deny they exist, we can't really have a conversation, so I won't bother typing it out. Let me know if you change your mind. In context your JWZ quote doesn't make any sense either, because Xorg got bug fixes for a very long time. It's being moved away from because it's no longer effective to keep doing that, which is the opposite of what that quote suggests. Please don't let rude and dismissive quotes like that be the guiding line of your discourse, let's actually discuss the real issues.
>If it suddenly started breaking now it wouldn't be because it is impossible to not break but because the developers somehow started breaking it.
This is the root of the misunderstanding -- there is no significant difference between these two. It's at the point where it needs a major refactor or rewrite to make continued work on it worth it, which is going to break things, and at that point writing a new display server makes a whole lot more sense.
They are not removing deprecated things, whenever possible the old things are still around and call the new things. At most they move some stuff to another unit (like a C include or Java import) which is a search+replace into the codebase that takes literally seconds. This happens extremely rarely though, i have non-trivial code that compiles with both a 14 year old release and SVN checkout (which they also try to keep working).
> To illustrate further what I mean, probably none of those people can be convinced to work on other old stuff like GTK1 or GTK2, because you're really comparing apples and oranges here.
Lazarus' currently main backend for Linux is GTK2 exactly because the Gtk developers broke backwards compatibility with GTK3. The GTK3 backend is close to completion though - just in time for the GTK4 to break things again!
There is also a GTK1 backend - it was broken a couple of years ago until someone noticed and fixed it. These are not high priority backends, but they keep them in working condition.
Personally i have contributed to the GTK2 backend (by fixing alpha channel support), since so far GTK2 has the best user experience of all toolkits available on Linux (IMO, of course). Since Lazarus has a policy of trying not to introduce unnecessary dependencies, i went the extra mile to ensure that it works even with very old versions of the library.
> If it were easy or profitable to do that in GTK, somebody would have done it already.
That is the point, it isn't easy or profitable. But something being not easy nor profitable doesn't make it wrong. After all the entire CADT thing is about focusing on the easy stuff because that is fun.
JWZ's page is very short and amusing to read, i recommend reading it.
> If you disagree, I'd love to hear your proposal on how to keep all the various API changes working in the same codebase without causing it to become overly complex and burdensome
By causing it to become "overly complex and burdensome". To spare 2-3 developers that hard work, this approach pushes the hard work onto 20000-3000000 developers.
> and this is probably over the span of at least 20 system libraries that have all deprecated and/or removed various things over the last 30 years.
Which they shouldn't have done.
> From my view, part of the problem here is that there were some legitimately bad decisions made back then, that looked reasonable at the time but turned out to be not so great, and nobody really wants to keep paying for those decisions.
But they should, or at least they should wrap these APIs so that they call new stuff and said new stuff should try - now with the benefit of hindsight - avoid being designed in a way that they'll be so easily broken (which will also help with the maintenance of the wrappers).
> This is not similar to something like a Pascal implementation where they could just aim for compatibility with an existing compiler from the 1970s and then build on that, these were entirely new APIs at the time and they didn't have their designs fully fleshed out, and in some ways they still don't, because the problem space is still somewhat open-ended.
Free Pascal has a ton of burden from design decisions they made in the 80s, 90s, etc including keeping source code compatibility with Delphi and all the boneheaded decisions Borland/Inprise/Embarcadero/CodeGear/whatever did. But also they keep compatibility with standard Pascal, Mac Pascal, Turbo Pascal (which is different from Delphi) and a bunch of other dialects and even specialized dialects like Objective Pascal (for Objective-C interop). They do that by allowing the source code files to switch dialect with dedicated compiler switches and even enable/disable parts.
Yes, this adds a TON of overhead and burden on the compiler writers' side but everyone involved agrees it is a good thing to avoid breaking others' code.
I have a feeling you are greatly underestimating the combined effort that went on FPC and LCL.
> In context your JWZ quote doesn't any make sense either, because Xorg got bug fixes for a very long time.
I was explicit in my original message that Xorg is among the projects that are actually stable so, yes, CADT does not apply to Xorg.
> there is no significant difference between these two.
Of course there is.
> It's at the point where it needs a major refactor or rewrite to make continued work on it worth it
Key words: "worth it". Worth it to whom? People who want to play on shiny toys?
> which is going to break things
They may introduce bugs with the refactor, but as long as these are acknowledged as bugs and get fixed, there wouldn't be a problem.
The problem is if they break things intentionally. THOSE are unavoidable. When i can run an X server in Win32 which has a completely different API and display model and the X server barely has any control over the underlying window system, it is absolutely inexcusable to have incompatibility issues in an environment where the X server has complete control over the display, input, etc.
> and at that point writing a new display server makes a whole lot more sense.
Only if you see the glass broken by a bull in a glass shop as unavoidable, without questioning why the bull was there in the first place.
EDIT: also in the other message you mentioned that people do not want to maintain legacy software in their spare time. While it isn't correct - a lot of people do - it is also correct that a lot of people do not want to do that because it can be a lot of work. THIS MAKES IT EVEN MORE IMPORTANT FOR THE SOFTWARE DEPENDENCIES TO NOT BREAK so the little time people can afford to put into their software isn't wasted in keeping up with all the breakage their dependencies have introduced just so they can do the same thing in a different way.
To use GTK as an example again, Gtk1 and Gtk4 fundamentally provide the same functionality - sure, Gtk4 has a few more widgets and some fancy CSS support, but fundamentally it is all about placing buttons, labels, input boxes, etc on windows and reacting to events. Yet people who wrote Gtk1 code had to waste time updating it to Gtk2, then again waste time updating it to Gtk3, then again will have to waste time to update it to Gtk4 and all so that they can have buttons and labels and input boxes on windows that people can click on to do stuff.
That is a MUCH worse waste of time because all that time these developers spent to keep up with the breakage could have been spent instead on working on the actual functionality their programs provide.
Instead they not only have to waste time in keeping up with Gtk just so they can do the same stuff, but chances are that due to these changes they are introducing new bugs in their programs.
See XFCE as an example. Or even GIMP, which took ages to switch to GTK3 (again, just in time so they can now waste even more time to switch to GTK4).
That's great that they have the bandwidth to do that, I commend them for it, but other projects don't have the time to maintain and work around deprecated APIs forever.
>Lazarus' currently main backend for Linux is GTK2 exactly because the Gtk developers broke backwards compatibility with GTK3.
If they want to help avoid this in the future for other programs, I would urge them to try to write some kind of compatibility wrapper. It would be mostly the same amount of work as doing it upstream, and upstream seems to have no interest in doing it since they would rather focus on helping people get their apps ported to the new way. But this would only work for some things; other things simply can't be provided with any amount of backwards compatibility.
>That is the point, it isn't easy or profitable. But something being not easy nor profitable doesn't make it wrong. After all the entire CADT thing is about focusing on the easy stuff because that is fun.
If you take that approach, you really could say the same thing about these other projects that don't want to upgrade their apps to GTK3/4, etc. Of course they won't do it because it's not fun for them, those projects don't really care about the toolkit, they just want to have some kind of GUI quickly so they can then focus on the rest of their program. At least that's been my experience with them anyway, I sympathize with that but it also conflicts with the need to make changes in the toolkit. So eventually somebody has to compromise somewhere.
>JWZ's page is very short and amusing to read, i recommend reading it.
I've read that page more than a decade ago, as I've said I think it's condescending flame bait that serves to distract from the real technical issues. And it's ableist towards people who do actually suffer from attention deficit disorders. If you want to help fix these issue, please don't refer to it.
>By causing it to become "overly complex and burdensome". To avoid the hard work on 2-3 developers this approach pushes that hard work to 20000-3000000 developers.
I'm sorry I really don't understand what you're saying here. The 20000-300000 developers should easily be able to join together and use their numbers to come up with a solution that is much better for them, no?
>Which they shouldn't have done.
I would urge you to try to maintain all those system libraries for a few years, and then revisit this statement and see how you feel about it after that.
>But they should, or at least they should wrap these APIs so that they call new stuff
Somebody interested in this can just build this wrapper separately, there's no reason it needs to live in the same repo as the new version.
>I have a feeling you are greatly underestimating the combined effort that went on FPC and LCL.
Not quite: my point is to illustrate that the same amount of work needs to be done in other projects if you want that level of backwards compatibility.
>Key words: "worth it". Worth it to whom? People who want to play on shiny toys?
If you want to describe new features, improved performance, security fixes, etc, as "shiny new toys" then yes, I guess you could say that. I'm not sure what the distinction here is because before you said you wanted these shiny new features?
>The problem is if they break things intentionally. THOSE are unavoidable.
That's also the point I'm getting at: Xorg was at a point where they were going to have to break things intentionally, because some of those APIs are actively causing security issues and cannot be fixed without unavoidable breakage. The apps have to move to a new API if they want this to be fixed, there is no way around it. Any rootless X server (such as the one you used on Windows) will also cause some apps to not work in subtle ways, compatibility is not perfect there either, and Xwayland is basically built with the same design constraints.
>Or even GIMP, which took ages to switch to GTK3 (again, just in time so they can now waste even more time to switch to GTK4).
Depending on your project, porting to GTK4 won't be a waste of time. The rendering model has changed entirely and is now mostly hardware accelerated, so you may see major performance improvements on e.g. high DPI displays. But this is not something that can be provided by a wrapper, to get the major benefits out of it, the apps have to rewrite their widgets to use the scene graph instead of using old-style immediate mode drawing. There would be little benefit if you didn't do that and continued to use GTK2-style drawing. For me at least, that's why I think it's mostly a bad idea to try to make a complete compatibility layer. Maybe it would work for some widgets but apps really need to do a real port if they want the major benefits.
That is the thing, Lazarus and LCL are almost entirely made by volunteer developers working on it in their free time and yet they manage to not break things unlike other projects that have corporate backing and fulltime developers.
It isn't a matter of bandwidth, it is a matter of caring about the work and time other people have spent on their platform.
> If they want to help avoid this in the future for other programs, I would urge them to try to write some kind of compatibility wrapper.
That would be pointless, LCL itself is already a compatibility layer for GUI applications (LCL is primarily a GUI toolkit) and Gtk2 is just one of the several backends. If they had to write Gtk3 support, they might as well do the Gtk3 backend anyway (which is what they did, Gtk3 work is already in progress, it just isn't as stable as the Gtk2 backend).
My point was that they wouldn't have to waste time on the Gtk3 backend and could focus on other things if Gtk3 didn't break backwards compatibility. Instead they'd add support for the new stuff Gtk3 introduced and use their limited time on more important things.
> It would mostly the same amount of work as doing it upstream, and upstream seems to have no interest in doing it since they would rather focus on helping people get their apps ported to the new way.
If upstream didn't break backwards compatibility with Gtk3, they wouldn't have to focus on that either, and everyone would be spending their development time on what their applications are all about instead of keeping up with their dependencies' breakage.
> But this would only work for some things, other things simply can't be provided with any amount of backwards compatibility.
Which only happens because the upstream developers broke backwards compatibility.
> If you take that approach, you really could say the same thing about these other projects that don't want to upgrade their apps to GTK3/4, etc. Of course they won't do it because it's not fun for them, those projects don't really care about the toolkit, *they just want to have some kind of GUI quickly so they can then focus on the rest of their program*.
But that is exactly the issue here, applications aren't using Gtk (or whatever) because they love Gtk itself as an entity, they do it because Gtk provides something - a GUI library - that they want so they wont have to make their own and instead can focus on the stuff that actually matters: their application's functionality. It makes absolutely perfect sense that they wont want to waste time (especially if they are not working on their application full time) to keep up with Gtk's breakage.
Libraries in general are a means to an end, not the end in themselves.
Having a library stop being compatible with its previous versions means that a developer has to stop working on the application they are working on (the stuff that matters) to waste time on something they initially picked up so they can save time - so it makes sense to try and avoid that.
> At least that's been my experience with them anyway, I sympathize with that but it also conflicts with the need to make changes in the toolkit. So eventually somebody has to compromise somewhere.
Changes can be made in the toolkit without breaking existing applications. They may not look as pretty as if you break things and rebuild them, but at the same time you keep existing code working, existing applications running, existing knowledge valid and help make a more reliable platform for both developers (who can rely on your platform to help them instead of wasting their time) and users (who can rely on your platform to have their applications working even if the developers abandon the applications).
It is even good for keeping open source applications - it makes it easier for new developers to pick up some abandoned code and help keep it working. As an example some years ago i got MidasWWW working:
...the codebase of which was at the time almost 25 years old. Yet because Motif didn't change its API, i barely had to touch the UI. It took me an hour or two (i do not remember, it wasn't much) to get it working and the vast majority of the changes i had to do were some old C-isms and 32bit assumptions that modern GCC on a 64bit machine complained about. In fact the only UI-related changes i had to make were because the Motif version the browser was written for had some incompatible changes from the "base" Motif at the time (ie. it wasn't exactly Motif's fault but whoever distributed their modified version).
> I'm sorry I really don't understand what you're saying here. The 20000-300000 developers should easily be able to join together and use their numbers to come up with a solution that is much better for them, no?
No because they all work on different projects.
What i mean is simple: if Gtk (or any other library that breaks backwards compatibility, while Gtk is a popular example that causes a ton of applications to waste time keeping up with it, it is certainly not the only case - SDL1.2 to SDL2 was another case, though at least AFAIK there is now a drop-in SDL1.2 replacement wrapper that calls SDL2) makes a breaking change because one or two of their developers wanted to make their life a bit easier, that breaking change will have a ripple effect to every single project that relies on Gtk and all the developers who work on those projects. One popular library having one breaking change, even if it was done with good intentions, by one developer can cause thousands of other developers on thousands of other projects to have to deal with it - and people do not work synchronously, this could take time.
> Somebody interested in this can just build this wrapper separately, there's no reason it needs to live in the same repo as the new version.
There are two reasons: 1. to dissuade the main developers from breaking things because, as you imply, it'd be additional work and they'd "feel" the effect of their breakage and 2. to make it much easier to keep up with any changes, if necessary.
What you describe is really having others run behind upstream developers to pick up their breakage so that the upstream developers wont have to care about breaking stuff, when what i describe is upstream developers not breaking stuff in the first place.
> If you want to describe new features, improved performance, security fixes, etc, as "shiny new toys" then yes, I guess you could say that. I'm not sure what the distinction here is because before you said you wanted these shiny new features?
You do not need to break backwards compatibility to provide those though.
> Xorg was at a point where they were going to have to break things intentionally, because some of those APIs are actively causing security issues and cannot be fixed without unavoidable breakage. The apps have to move to a new API if they want this to be fixed, there is no way around it.
Xorg's security issues have been greatly overstated. The server already has functionality to deny any access from untrusted sources (ie. pretend they are the only application running) so you could do that with applications you do not want to trust (or request they not be trusted, e.g. browsers), but even beyond that there are other ways to improve its security - even down to a sledgehammer-like approach of running separate nested instances. All the effort that went towards reimplementing a display server from scratch with Wayland (and making all sorts of mistakes of their own) could have gone towards improving Xorg instead, without breaking an already tiny desktop ecosystem.
> The rendering model has changed entirely and is now mostly hardware accelerated, so you may see major performance improvements on e.g. high DPI displays.
HiDPI shouldn't matter much for performance unless the previous implementation was done on the CPU, but that has nothing to do with the rendering model.
Immediate graphics APIs can still be batched and in fact this is what, e.g. most OpenGL implementations did for years - when you request to draw a triangle, the implementation wont draw that triangle immediately, but keep the request in case more triangles and other commands come later. As long as what you "perceive" is the same, it doesn't matter if you perform an immediate mode call "immediately" or keep it around for later use. This can be an issue with single-buffered output, so you'd need to be able to do that immediate output too, but most applications tend to do double-buffered output anyway. And at the end of the day, you still perform draw calls even with a scene graph, you just have better ways for batching those calls.
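As a toy illustration of that point (hypothetical, not taken from GTK, Qt or any real toolkit): an API that looks immediate to the caller can still queue commands internally and submit them in one batch per frame.

    /* Hypothetical command-buffer sketch: draw_triangle() looks immediate to
       the caller, but commands are only submitted when flush() runs, e.g. at
       buffer swap. */
    #include <stddef.h>
    #include <stdio.h>

    typedef struct { float x0, y0, x1, y1, x2, y2; } Tri;

    static Tri queue[1024];
    static size_t queued = 0;

    void draw_triangle(Tri t) {            /* "immediate" public API */
        if (queued < sizeof queue / sizeof queue[0])
            queue[queued++] = t;
    }

    void flush(void) {                     /* once per frame: one batch */
        printf("submitting %zu triangles in one batch\n", queued);
        queued = 0;
    }

    int main(void) {
        draw_triangle((Tri){0, 0, 1, 0, 0, 1});
        draw_triangle((Tri){1, 1, 2, 1, 1, 2});
        flush();
        return 0;
    }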
Note that i'm not against the scene graph approach (though it does need to have some "way around"), just saying that you can optimize immediate mode APIs a lot while preserving them.
But there is also another way: have both. Widgets can opt-in the new approach if they need the extra performance (after all not everything will need it) which will allow applications to keep working while converting to the new approach piecemeal. Old widgets will simply have a "canvas" scene graph node created for them where they can draw using the old API and new/converted widgets will use the pure scene graph.
Yes, this can introduce issues, but again, bugs can be fixed. And for a library with the popularity of Gtk (note that i do not specifically refer to Gtk here, it'd be the same for Qt or any other UI library that wanted to switch from immediate mode to scene graph without breaking compatibility) it wont be hard to find programs to test this against.
There are ways to solve this, and other things, if the developers care about not breaking backwards compatibility.
If you need other support libraries that don't interface with the system just ship them yourself. Or let Valve do that for you with the Steam Linux Runtime.
Shipping libraries only helps if whatever those libraries rely on is also stable and the functionality they provide is still available. After all, a library might still be using a stable ABI, so all it can tell you is that OSS is not supported - which, while technically correct and won't cause the program to crash (assuming it can handle not having sound), isn't very useful in practice.
Nowadays I don't care, Windows, macOS and mobile OSes FTW.
If by "developers" you mean the ones working on the unity engine or Nvidia proprietary graphics drivers then you're right, but in my experience there are a number of problems and pitfalls further down the stack which game developers can't reasonably be expected to mitigate.
The only problem I ever had was in Wasteland 2, where in the second part of the game there was some bug with the fog on the world map with Intel drivers. Setting some obscure environment variable fixed that.
There is a 60-80 FPS difference for me in CS:GO between Linux and Windows with AMD graphics.
That's not how tearing works.
If you want to sell your game, the smart money is in putting all your resources into the Windows version.
The problem was that Kylix required a very specific kernel version (I think it also required some kernel module), so it mostly did not work out of the box and people got discouraged.
The fact that Borland failed to advertise it correctly and never put more effort into making this tool better is another story. Those were times of Borland's identity change from RAD tool vendor to super-enterprise corpo Inprise with some crazily expensive ALM tools that were competing with the likes of Telelogic Doors (now an IBM brand), etc.
I'm curious as to what kind of game engine you're using where targeting Linux isn't as simple as choosing it in a dropdown menu as well, most modern engines support that very well.
I think it's unreasonable for any software developer to release a product and expect no bug reports to come back at them, but it still doesn't mean they have to tackle everything.
Why not just tell them that? Is it really better to give up those dollars because someone is using a setup you don't support?
However, those dollars you seem anxious for me not to give up still didn't really cover the time investment of dealing with Linux - not just the support requests, but getting the build environment set up and performing the testing and all of that.
For reference, the game in question did ~75% of units sold on Windows, ~25% on Mac, and some fraction of a percent on Linux. If I hadn't released on Ubuntu, I would probably have lost less than $1000, gross.
It simply isn't possible that there exists a technophile out there patient enough to set up such a non-Ubuntu rig, yet cave-dwelling enough not to thank their deity for the simple fact that any graphics-hungry software turns out to run at all without crashing.
Barring a copy of the original email and video testimony from the sender, it's more reasonable to believe this was someone trolling you (or perhaps even a team of someones if you received more than one such email).
Maybe actually clarifying where the community lives, like WineHQ and ProtonDB do for running Windows games on Linux, would be a good start to help reduce devs having to deal with this sort of thing.
The native Linux version of War Thunder crashes on launch for me, but the Windows version through Proton runs perfectly.
It is a pity though, because I suspect that the vast majority of the requests you had shouldn't have been directed at you but at others, i.e. Linux distributions or specific projects (Mesa etc.), but there is no one to triage and direct support requests.
It's a Linux/Windows compatibility layer from Steam. It's pretty great!
A lot of the incompatibility between Linux/Windows in my experience has actually been from the Anti-Cheat systems. Apex Legends and Intruder being examples that come to mind.
There are still games that have problems, but Valve and the wine devs and others are knocking those down one by one. So those 15 people that wanted to switch to linux but couldn't because it couldn't run their games can now do so :)
If you are being overwhelmed by bug reports from Linux users, the solution is to let those users triage, categorize and maybe even fix issues amongst themselves by having a publicly accessible tracker. Just like with forums for your game, you might even find people who will moderate those bug trackers for you. Valve realized this early in the Steam for Linux beta and has been using GitHub issues for all of their Linux ports.
As for the differences between Linux distributions, I think the concerns are greatly overblown. The biggest difference between Desktop Linux distributions boils down to the versions of various libraries that they ship. For most of those you don't need to care at all and should ship your own version (or use the Steam Linux Runtime). Base system libraries (glibc, OpenGL, Vulkan, audio) that you can't ship (because they contain hardware specific code that needs to get updates even after your game is EOL, or for other reasons) tend to provide strong backwards compatibility, so you only need to target an old enough version to cover all the Linux distributions you want to support. A complicated one is the C++ standard library since some graphics drivers will depend on that - I recommend statically linking your own version and not exporting/importing any C++ symbols in your program.
I agree with others here that it is fine to only guarantee support for a limited set of Linux distributions (e.g. current Ubuntu LTS). However you should not consider reports from other distributions as a nuisance but rather as an early warning system or "linter" that will let you know about potential problems that users on your supported distribution (or even your users on other operating systems) may encounter in the future.
Next you can have various windowing systems, window managers and audio systems (even on one Distribution). Just ignore those: Don't interface directly with Xlib or pulseaudio but instead use a proven abstraction that takes care of the different quirks for you: SDL. That is, assuming you are not already using an engine with mature Linux support. Even if there is a quirk not handled in SDL, your users now are empowered to debug SDL themselves and fix the issue there, benefitting everyone. SDL will also make it easier to support future systems: if you never talk to Xlib and GLX directly, SDL can give you Wayland support for free.
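To make that concrete, here is a minimal, hedged SDL2 sketch (not from the original comment): the game only talks to SDL, and SDL selects whatever windowing and audio backend the system provides (X11, Wayland, PulseAudio, PipeWire, ...). The build line is an assumption and depends on how SDL2 is installed.

    /* Assumed build: cc sdl_window.c $(sdl2-config --cflags --libs) */
    #include <SDL.h>

    int main(int argc, char *argv[]) {
        (void)argc; (void)argv;
        if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) != 0) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }
        /* SDL decides whether this becomes an X11 or Wayland window. */
        SDL_Window *win = SDL_CreateWindow("game",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            1280, 720, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
        if (!win) {
            SDL_Log("SDL_CreateWindow failed: %s", SDL_GetError());
            SDL_Quit();
            return 1;
        }
        SDL_Event ev;
        int running = 1;
        while (running) {                  /* minimal event loop */
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT)
                    running = 0;
            SDL_Delay(16);
        }
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }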
Finally you have drivers. This isn't really much different than under Windows. Like with Linux distributions, issues with one driver often point towards things that just happen to work correctly in another vendor's driver but could break in the future. Having testing on more drivers is a good thing. Compared to Windows however, there is one big advantage: With the exception of Nvidia (and the proprietary AMD driver, but no need to care about that one) you (and savy users) have full source access to those which makes debugging some issues a lot more feasible. But further than that, they are also developed in the open with a public bug tracker which gives you direct access to the developers. You can even chat with them on IRC if you like - just make sure that you are not wasting their time any more than you consider your users are wasting your time by reporting bugs.
I have also seen many concerns in this thread that bad reviews from Linux users will tarnish their score. First, realize that Steam reviews are always relative compared to expectations. If you manage those expectations, you can limit negative reviews - that goes for Linux users just as for anyone else. But Linux users can also help your game by recommending it to others. While the same is true for Windows users, those initial Linux users are easier to reach because there is less market saturation (especially in some genres). Using just the raw Linux sales percentage does not necessarily give you a full view of what sales you have gained by releasing a Linux port. To be fair, there will also be Linux customers that would have bought the Windows version, but either way % sales and revenue impact is not a 1:1 relation.
In conclusion, I think the main problem Windows developers face when targeting Linux is not technical issues (which of course exist) but cultural differences. Once you overcome those and learn how the Linux ecosystem works, you can use it to your advantage.
Mac users were effectively the most expensive because his team was (then) spending a lot of time porting their graphics code to Metal.
Linux users were the least expensive because they tended to be sophisticated users who were accustomed to solving their own problems. He cited a particular customer who he said had a solid track record of finding graphical glitches in the game, then opening bugs against Intel GPU drivers and getting them fixed.
Windows users were somewhere in between.
Of course we didn't discuss the opportunity cost of supporting Linux (financially probably not worth it), I'm not sure how much his view was a function of (maybe) not having to personally answer support requests, or whether his experience could be generalized beyond his particular customer demographic, but I learned quite a bit from his response.
If I ever ship my own game I hope to support Linux not because I think it's the right financial move, but because I think offering cross-platform compatibility is just part of being a good digital citizen. A lot of us lived through a time where Windows was about the only game in town, and I don't want to ever go back there. (Plus there's a selfish element: I develop on Linux, so I want to play on Linux!)
MINGW covered the Windows build, clang/osxcross for the MacOS build, and plain old gcc for Linux. It's all oldschool autotools+pkg-config dances for the cross-compilation. Plain C and SDL2+OpenGL under the hood, no engine.
It's nice being able to do it all from my preferred GNU/Linux environment, and I was able to at least smoke test the windows builds successfully via WINE. The main shortcoming currently is there's no MacOS WINE-equivalent that's mature enough to run a graphical GPU-accelerated video game AFAIK.
It's nothing to write home about as it was mostly just an experiment to learn some OpenGL, evaluate my ability to ship something GPU-accelerated for the big three desktop OSes built entirely from GNU/Linux, better understand the shortcomings of myself and my lone collaborator when it comes to creating games, all while gaining visibility into the Steam platform and how much exposure one could expect from simply shipping a title on their store without any advertising.
So it's not exactly a fun or good game... as that didn't even make it into the list of priorities. I just kept the scope very small to ensure it could be shipped as a side project with some semblance of polish.
I don't think anything I have to contribute on the subject of processes or approaches should be considered particularly valuable since it's not really a successful game by any relevant measure.
It also feels like desktop operating systems are becoming so hostile towards running arbitrary native programs that unless you're shipping some AAA title pushing all the hardware limits of performance, it might not make sense to bother shipping native executables anymore. For individual indie devs producing small titles, the web might make much more sense, webgl/wasm/webgpu avoids all this untrusted executable friction and modern computers are fast enough to make it work. It's unclear to me how this dovetails with distribution/discovery and earning money via established platforms like Steam though, there's some dust that needs to settle here from what I can see.
I am looking to make an RTS 2D game on a global map, similar to an old game called Red Storm Rising - https://www.myabandonware.com/media/screenshots/r/red-storm-...
I am a Java/C++ dev
For example: if you hold a reference to an object in your script and the object is removed from the scene, the engine can reuse the address for a new object, which results in your script holding a reference to the wrong object.
I believe it's the object pooling system not talking to the scripts. This bug is a few years old and I believe it won't be fixed till v4.
Most people don't seem to encounter it, and I worked around it fine by just making sure I manually null out any references when they leave the scene.
The real WTF which made me finally say this engine is not for me is that they changed the behavior so that it won't happen in Debug, but will still happen in Release. Different behavior for Debug and Release is an even bigger bug!
It's such a rare and sneaky bug, and when it starts happening in your release you can't even debug it!
Small update: I reported the issue on GitHub in September 2019. From reading the comments it sounds like the inconsistent behavior was only present from version 3.2.2 to 3.2.3 (~6 months) and then fixed in 3.3. However, a user is reporting that the issue is still happening in 3.3 as recently as May this year.
I'm also a Java dev, and have dabbled in making simple "hello world" types examples for different game engines, and Godot was the first one that just clicked right away. Beyond that, I was able to stick to it, and was able to fully publish a game for the first time! (Puck Fantasy: https://www.lowkeyart.com/puckfantasy)
Setting expectations though, if you are expecting GDScript (the scripting language it uses) to be as full featured as Java (or C++), you'll be left wanting. It took some getting used to, to understand the limitations of the language, and adapt accordingly. After moving forward from that mental block, things have been even smoother. And if you really want it, there is C#/mono support, though I recommend your first project to be with GDScript, since it integrates very well with the editor, and creates a smooth learning experience.
I did a toy implementation here: http://github.com/eamonnmr/openlockstep
I haven't rebuilt that in Godot yet, but I will eventually. Godot's workflow is well worth using its bespoke language.
I understand mid/big studios don't want to give away a percentage to Epic (over 1mln) - which translates to "learn unity to land a job" - but for indies unlikely to reach 1 million in revenue, Unreal is basically free technology from the future.
A final note is that you can also fairly easily develop games for godot with C++ using gdnative, though you might be better off using gdscript, even though it means learning a new syntax.
Also, iirc Xbox support is available via UWP in the main branch.
Which is a shame. KDE's Plasma is an absolute shoo-in for me coming from a Windows daily-driver place. I can't imagine any distribution being able to inject its UI into the kernel downstream from Linus. It sounds stupid to be so turned off for such a simple reason, but I often would tab out of games to jump to Discord, Spotify, or Terminal and it would be easily 10+ seconds of waiting for things to render.
Does anybody know what the future looks like for this problem? Will there be some reimagining of desktop rendering in the near future on Linux? Or maybe I misunderstand the problem?
Could also try another Window and/or display manager.
At home I have set up multi seat, meaning multiple graphics (GFX)/sound cards, separate screen keyboard and mouse, connected to the same PC. The only problem is that one of the GFX cards goes to sleep when inactive for a few hours, so I have to restart the display manager to wake it up, but the performance is excellent, we play games simultaneously, watch Youtube or what not - without any performance issues, on a several years old PC.
Wow, this feels like a blast from the past, the bad old days of people pushing Linux Desktop and blaming all issues on the user or hardware.
You are recommending an obscure distribution that is unlikely to be supported by anyone. You are claiming to know that their computer is somehow unable to run 'Linux' in such a way that... Alt tabbing takes a long time to re-render the system UI. You are entirely ignoring their own research into the topic with a 'works for me!'. You don't even have the courtesy to claim that you've tried their exact use case to say that it works for you, you just claim a generic 'it works better than Windows for many of us' (while of course ignoring that the Linux gaming community is two orders of magnitude smaller than the Windows one, typically because Linux and Games are not good friends, for many reasons).
Cynically: is this simply a PR stunt by Godot team?
Seems like "on Linux" is appropriate (Denis is the lead programmer.)
I can’t wait for webgpu and wasm to be more mature. You’ll be able to truly write a game once and release it literally everywhere. Like what Java was supposed to be. I think it’s going to be a nice little golden age for games and other software. But the last time I checked webgpu wasn’t ready yet.
So the browser might have some unavoidable overhead, but for most games it probably won't even matter, and others using the wgpu API can target native desktop platforms.
A voxel game, Veloren, recently migrated to wgpu and wrote up some of their experiences here:
So the game that runs without problems on your development machine, can have all sorts of issues on the client browser that you cannot work around like on native, because they are caused by how the browser is blacklisting the user's hardware or drivers.
Naturally, random Joe user has no idea what is going on, and will dismiss your game as crap because it is running at e.g. 3 FPS.
Then even if everything is supported in hardware, you are still limited to an OpenGL ES 3.0 subset (defined in 2011), so no matter how good the GPU is, there is only so much you can do.
Maybe the browser versions won't be too great, but I'm more optimistic about the WebGPU libraries which allow you to target the desktop.
There are browser flags to disable blacklisting, which is something regular Joe/Jane has no idea whatsoever exists.
Besides, you can head off to webgpu.io and follow up on meeting minutes.
Even better, attend the upcoming WebGL/WebGPU meeting from Khronos (registration currently open) and pose that question if you prefer to hear the same from the browser vendors themselves.
Have a nice joyful day.
The overhead is low and modern computers are very fast. The budget will be very healthy indeed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00360.warc.gz
|
CC-MAIN-2021-43
| 64,598
| 272
|
http://m0n0.ch/wall/list-dev/showmsg.php?id=13/72
|
code
|
Dinesh Nair <dinesh at alphaque dot com> has been working hard to produce a
FreeBSD 6.0-based alpha version of m0n0wall so that we can better
evaluate its features, performance and implications on future
development. Thanks, Dinesh!
The alpha version is called 1.3a1 and is available for generic-pc and
Just to make it clear: this is an *ALPHA* version, and it is only
intended for use by people with some knowledge about FreeBSD. Certain
things don't work yet, for example DHCP on WAN. After booting, you'll
end up in a shell - type "/etc/rc.initial" to get the console setup
Also, the availability of this image doesn't mean that anything has
been decided in favor of FreeBSD 6.0. We just need prototypes to do
valid comparisons, and if you would like to build prototype m0n0wall
images with another base operating system, you're most welcome to do
that. Also, 1.3a1 uses ipfilter 4 - pf would be another (some might
argue the better) option under FreeBSD 6.0.
I've done some very very cursory throughput testing on a net4801 -
WAN -> LAN throughput with a single TCP connection (iperf) is about
20 Mbps (with 1.2 it's 38 Mbps in the same configuration). Hope we
can still tweak that.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425766.58/warc/CC-MAIN-20170726042247-20170726062247-00525.warc.gz
|
CC-MAIN-2017-30
| 1,186
| 19
|
https://www.wafrat.com/graphics-cards-comparison/
|
code
|
In the past few weeks, I've regained interest in gaming, and I got curious about laptops vs desktop graphics cards. So I made this chart.
Initially, I just googled a few GPU models and ended up using userbenchmark.com. Here's what the page looks like for my old GTX 970 https://gpu.userbenchmark.com/Nvidia-GTX-970/Rating/2577. The website will tell you the fps in certain games depending on the resolution and the details settings, but you won't get many data points. Also, games tested are different from one graphics card to another.
So I used the average bench percentage score instead.
There are other alternatives to userbenchmark, and I haven't tried them. For science, it'd be interesting to redraw the graph based on different benchmark websites to see how they compare.
I tried using Google Charts, but I was not able to plot everything I wanted to plot. Here's what I wanted to do:
- Plotting graphics cards as dots based on their release date and their performance.
- Link cards in the same price range with a line.
- Distinguish desktop and laptop GPUs.
- Add game requirements as horizontal power thresholds.
Google Charts can do the first, but it can't even put the card's name near the dot.
So instead I used Vega. Vega is an open-source tool that generates a chart from a JSON configuration. You feed it data as JSON, CSV or TSV, then you configure how to render it.
Putting it together
I made a Google Sheet that contained all the data I wanted to plot, then exported it to TSV.
Then I published it as a gist:
Finally I wrote the plot config in the Vega online editor.
I also saved that configuration as a gist, so you can load that gist from the vega editor and see for yourself: GPU cards comparison.json.
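If you haven't used Vega before, here is roughly what such a config can look like. This is only a sketch in the Vega-Lite grammar (the higher-level dialect the online editor also accepts), with made-up column names (name, release_date, bench, form_factor) and an arbitrary threshold value; it is not the actual spec from the gist. The same JSON can be pasted into the online editor or rendered programmatically with vega-embed:

```ts
import embed, { VisualizationSpec } from "vega-embed";

// Sketch only: column names, file name and the threshold value are illustrative.
const spec: VisualizationSpec = {
  $schema: "https://vega.github.io/schema/vega-lite/v5.json",
  data: { url: "gpus.tsv", format: { type: "tsv" } },
  layer: [
    {
      // One dot per card: release date vs. average bench score.
      mark: { type: "point", filled: true, size: 80 },
      encoding: {
        x: { field: "release_date", type: "temporal", title: "Release date" },
        y: { field: "bench", type: "quantitative", title: "Avg bench %" },
        color: { field: "form_factor", type: "nominal" }, // desktop vs laptop
        tooltip: [{ field: "name", type: "nominal" }],
      },
    },
    {
      // A horizontal rule marking a game's requirement threshold.
      mark: { type: "rule", strokeDash: [4, 4] },
      encoding: { y: { datum: 150 } },
    },
  ],
};

embed("#chart", spec); // renders into <div id="chart">
```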
From this chart we can get a few insights:
- Integrated graphics processors (IGP) are still useless, even on the latest 11th generation Intel processors. They keep marketing it as "twice as fast as the previous year", but in terms of absolute power, it's less powerful than an entry-level laptop GPU from 2014.
- Laptop GPUs are not bad at all. Overall, a laptop GPU of a certain class is as powerful as a desktop GPU of the same class from 1 or 2 generations prior (i.e. 3-4 years earlier). For example, the 2021 laptop RTX 3060 is as powerful as a desktop RTX 2060 from 2018. The 2021 laptop RTX 3080 is as powerful as a desktop GTX 1080 Ti from 2017. A more generous comparison would be: an entry-level laptop GTX 1660 Ti from 2019 is more powerful than my mid-level GTX 970 from 2014.
- Laptop GPUs and most desktop GPUs are not ready for 4K gaming. To run AC Valhalla in 4K at 30 fps, you'd need a previous-gen desktop RTX 2080 Ti or higher. Not even a current-gen desktop RTX 2070 can run it. For Cyberpunk 2077, an RTX 3070 will do.
This chart is a work in progress. I could do a few more things to it:
- Add console performance.
- Add price information. This would become even more relevant once I add consoles.
- Automate information fetching so that I can add more cards easily and even try it on different benchmark tools.
- Add missing Nvidia graphics cards.
- Add AMD graphics cards.
- Add more game thresholds.
- Make it interactive so that people can pick what they want to see. This will become crucial as I add more data.
- Fix the performance threshold for Cyberpunk 2077 for "2k ultra". I meant 4k.
- Produce a similar chart for CPUs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817153.39/warc/CC-MAIN-20240417110701-20240417140701-00086.warc.gz
|
CC-MAIN-2024-18
| 3,386
| 30
|
https://vlctechhub.org/events/introduccion-a-quarkus-60649a04dcaa/
|
code
|
Jueves 22 de abril de 2021
Nowadays, it's very common to write an application and deploy it to the cloud and not worry about the infrastructure.
In this type of environment where instances are created and destroyed frequently, the time to boot and time to first request are extremely important, as they can create a completely different user experience.
This ends now.
In this session, I will introduce you to Quarkus, a Cloud Native, (Linux) Container First framework for writing Java applications.
Carles Arnal (Software Developer, RedHat)
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00580.warc.gz
|
CC-MAIN-2021-21
| 541
| 6
|
https://www.fbk.eu/en/event/12656/hackathon-with-microsoft/
|
code
|
Hackathon with Microsoft
EIT Digital – Trento Co-Location Centre
Via Sommarive 18, Povo
On September 11-13, 2019 in Trento, Italy we invite you to join developers from Microsoft and physicists from the LHCb experiment at CERN, Fondazione Bruno Kessler, and the University of Liverpool to participate in a 3-day event focused on learning through hands-on experimentation called OpenHack.
OpenHack is not a scripted step by step guided exercise. At this event, you will work in teams with colleagues from physics, data science, and computer science backgrounds on a number of challenges based on real data designed to mimic the development experience of the LHCb experiment. Microsoft software development engineers and LHCb physicists will be available to help in a coaching capacity.
During OpenHack you will:
- Learn how to approach data analysis for the LHCb experiment at CERN
- Learn how to search for the “unexpected”: apply Microsoft’s ML technologies and deep learning methods to high-energy physics experimental data using anomaly detection
- Network with fellow community members and other PhD students, as well as Microsoft developers
- Enjoy a challenging fun learning environment
Programme and registration at this link.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00191.warc.gz
|
CC-MAIN-2024-18
| 1,240
| 11
|
https://discourse.nativescript.org/t/discourse-forum-client-for-ns-and-other-communities/2976
|
code
|
Hey fellow community members!
After not finding a suitable desktop client for Discourse forums, I created one: https://github.com/sean-perkins/discourse-forum-client. I felt it would be beneficial to share with the community a convenient way to store your sites and quickly access them. I know this is a reason I love using Slack.
You can either download it direct from the README or package it yourself if you are concerned about security.
Now go win all those prizes for forum activity
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247504790.66/warc/CC-MAIN-20190221132217-20190221154217-00080.warc.gz
|
CC-MAIN-2019-09
| 510
| 4
|
https://bytearcher.com/articles/es6-vs-es2015-name/
|
code
|
Officially it's ECMAScript 2015 Language
Officially, the name is "ECMAScript 2015 Language" and it's the 6th Edition of the ECMA-262 standard. The specification mentions neither ES6 nor ES2015, though they are handy abbreviations. Before deciding which name to use, let's inspect the release process a little closer.
Including the year in the specification name signifies a change in the release process. Previous versions were gigantic and were released many years apart. ES6 is the last big release; future versions will be smaller and released more frequently.
So far, the trend has been to release a new version each year. In 2016, a year after the previous release, the 7th edition of ECMAScript was released. It contained two new language features. Similarly, 2017 had six new features, and 2018 had eight new features. You can check all versions from TC39 finished proposals.
New features appear in JS engines before the ECMAScript standard
A new language feature goes through many phases before being included in the specification. It grows from an idea into a commented proposal and into an accepted language feature. Periodically, the committee responsible for the ECMAScript specification collects accepted language features and writes an updated edition of the ECMAScript specification.
In the last stage, before a feature is accepted into the language, the committee requires that two shipping VMs exist that implement it. This means that Chrome and Firefox can implement a language feature before it's included in an official ECMAScript specification.
The 'implementation first' approach means that you will be checking whether a JS engine supports a specific language feature instead of whether it supports a specific ECMAScript version. The situation is similar to CSS, HTML, and the browser runtime environment. Instead of checking for a version number, you'd check whether the Intersection Observer API works in your browsers of choice.
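In code, that check is ordinary runtime feature detection rather than a version lookup. A small illustrative sketch; the chosen features and the fallback are just examples, assuming a modern lib/target in the TypeScript config:

```ts
// Detect individual features at runtime, not an "ECMAScript version".
const hasFlatMap = typeof Array.prototype.flatMap === "function";      // ES2019
const hasFromEntries = typeof Object.fromEntries === "function";       // ES2019
const hasIntersectionObserver =
  typeof window !== "undefined" && "IntersectionObserver" in window;   // browser API

if (!hasFlatMap || !hasFromEntries) {
  // Load polyfills or pick a simpler code path instead of failing outright.
  console.warn("Some ES2019 features are missing; loading polyfills…");
}
```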
What name to use?
You should:
- use ES6 to refer to "ECMAScript 2015 Language" (arrow functions, template strings, Promises): it's shorter than ES2015, both are unofficial, ES6 was the last big release, and the name is in line with the previous big release, ES5; things change after that
- after ES6, use the names of language features, such as "globalThis" and "Array.prototype.flatMap"; the specification is only updated after working implementations exist in JS engines, so check TC39 Finished Proposals for the list of features to be included in the next specification
- to refer historically to one year's updates to the ECMAScript specification, use ES[year]
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104244535.68/warc/CC-MAIN-20220703134535-20220703164535-00257.warc.gz
|
CC-MAIN-2022-27
| 2,837
| 15
|
https://www.ifae.es/careers/jobs/2022/01/junior-developer/
|
code
|
This position is filled
Open Position Junior Developer
Who we are
The Engineering department of IFAE is a multidisciplinary team with the mission to make the scientists' projects a reality. We are in charge of the design and development of the mechanics, electronics and software for the instrumentation required for each project. IFAE-BIST is the Institute for High Energy Physics (Institut de Física d’Altes Energies) inside the Barcelona Institute for Science and Technology. At IFAE we conduct experimental and theoretical research at the frontiers of fundamental physics, namely in Particle Physics, Astrophysics, Cosmology, and Applied Medical Physics. We are involved in the ATLAS project at the LHC at CERN, the T2K neutrino experiment in Japan, the MAGIC telescopes in La Palma, the Dark Energy Survey project in Chile, the ESA Euclid satellite, and the VIRGO Gravitational Waves experiment, among others. We also work at the cutting edge of detector technology, developing pixel detectors for High Energy Physics, telescope cameras, and detectors for medical imaging and other scientific and industrial fields.
What we are looking for
We are looking for a junior C/C++ developer to work on the control software of the new LSST at the Vera Rubin Observatory (https://www.lsst.org). We are looking for a candidate with at least 1 year of experience programming in C/C++, the ability and motivation to research, prototype, and refine designs to solve engineering challenges quickly, practical hands-on ability to create functional designs, and a proactive attitude. The candidate must be fluent in English. Knowledge of Java and Python will be valued.
What will be your role
As a junior programmer in the control software department of the Technical Division of IFAE, you will work autonomously under the supervision of the Head of the Group. You will be integrated into a team of engineers and will be expected to work as a team member, proposing solutions and implementing them. You will work in an international collaboration of software teams within the Vera Rubin Observatory, participating in meetings and reporting to them. The engineering department at IFAE is a pool of engineers, so you will also participate in other projects such as CTA (Cherenkov Telescope Array) or Virgo (Gravitational Waves), designing and developing software for them.
What we offer:
Full time (40h/week). There will be a trial period of 6 months. Flexible schedule. Work on very interesting experimental science projects (scientific instrumentation, detectors, satellite systems, telescope systems, small experiments). Travel to singular scientific infrastructures. Opportunity to gain engineering experience learning first-hand. Personal growth, innovation, and learning every day.
Salary will be in the range of 24K€, and can be commensurate with experience and qualifications.
The selection process: Applications should be submitted to email@example.com and they should include a cover letter and CV. Sending CVs to the above address implies consent to the legal warning at the bottom of IFAE’s Home Page.
The application deadline is February 4th.
IFAE is an equal opportunity employer committed to diversity in the workplace, and we welcome applications from all qualified candidates. Women are particularly encouraged to apply. You may contact Otger Ballester (firstname.lastname@example.org ) for any questions related to this job opening.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819847.83/warc/CC-MAIN-20240424174709-20240424204709-00356.warc.gz
|
CC-MAIN-2024-18
| 3,492
| 15
|
https://www.mistriotis.com/2013/08/11/a_sivering_moment_whatever_scares_you.html
|
code
|
This is a tribute to Derek Sivers
Preface: I used to be a fan of Derek Sivers' blog, sivers.org/blog,
where two of his mottos clearly registered with me:
I have some easy rules of thumb to follow (sivers.org/scares-excites-do-it):
- whatever excites you, go do it
- whatever scares you, go do it
- every time you’re making a choice, one choice is the safe/comfortable choice – and one choice is the risky/uncomfortable choice. the risky/uncomfortable choice is the one that will teach you the most and make you grow the most, so that’s the one you should choose.
Another one that really stuck with me was the advice given to him on learning how to sing:
… When I was 14 years old, taking guitar lessons from Tom Pecora, he gave me that this-is-important-so-listen-well look, and told me something that stuck with me for life: “You need to learn to sing. Because if you don’t, you’re always going to be at the mercy of some a$$h0le singer.” …
Both of the above came into use when we had to redesign large parts of Cypsel before our initial release. We had decided to drop usernames as a way to address users and use people’s first names instead. This had the consequence that a user registering had to supply his/her first name as part of the registration process.
After a long refactoring session everything was OK with one exception: the registration form looked like this:
At that point we preferred, instead of an envelope decorating the name input boxes, to use a small person “avatar”. Conveniently, our (nothing to do with being an XXX-hole) singer designer happened to be on holiday and was generally reluctant to provide the time needed to make it. Unfortunately for us, the “envelopes” provided had custom spacing from their edges, which meant that I could not “just” download or buy an icon and place it there.
So one day I woke up with a feeling similar to those old cartoons where the main character has a small devil/angel figure constantly saying: “draw, draw, draw”. Then I decided that I would not stop until I had my little icon done.
First I needed an editor for SVG files (which is the format of the icons used). I knew that the original ones were exports from a tool like Photoshop or Illustrator, but SVGs are XML, which for our case means “formatted text”. I had read some tutorials on their structure on the Mozilla Developer Network, but an actual editor was necessary, something along the lines of… Microsoft Paint for SVG.
Additionally, it allows you to upload an existing image, which just rocks. So now the “problem” was reduced to imitating existing work rather than drawing something from scratch: “All drawing is re-drawing”. So the first step was to upload the original envelope image:
After some time spent remembering things from other editors, such as GIMP or Seashore, I started experimenting with the “path tool”:
The aim was to produce an arc that would fit within the original and sit at about 50% of its height. After an embarrassing amount of time, and with an extra circle on top, the result was something like the following:
So although I was looking at one of the ugliest drawings, the difficult part was over: after entering the “scared” and “singing” zone, there was “only” the colouring left, as well as removing the original. After 15 minutes of trying and going nowhere, there was an “a-ha”/eureka moment: since SVGs are text, maybe I should continue in a text editor. This was crucial.
To the text editor
Once in the “eyes” of pure text, the last obstacle became obvious: the envelope's vector lines sit inside an element that declares an opacity of 0.5, so I had to reposition the additional elements inside it. After that it was easy to do the rest, like copying and pasting the colour values or removing the original elements from the group. (The before and after screenshots in the text editor showed the initial file on the left and the final one on the right.)
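Since the whole point is that SVG is just XML text, the same kind of edit can also be scripted. A rough TypeScript sketch for the browser follows; the ids, shapes, and colours are made up, and this is not the actual Cypsel icon:

```ts
// SVG is plain XML, so the edit can be done with the ordinary DOM APIs.
const svgSource = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
  <g opacity="0.5">
    <path id="envelope" d="M2 5h20v14H2z" fill="#9aa0a6"/>
  </g>
  <path id="person" d="M12 12a4 4 0 1 0 0-8 4 4 0 0 0 0 8z" fill="#000"/>
</svg>`;

const doc = new DOMParser().parseFromString(svgSource, "image/svg+xml");
const group = doc.querySelector('g[opacity="0.5"]')!;
const envelope = doc.querySelector("#envelope")!;
const person = doc.querySelector("#person")!;

// Move the newly drawn shape into the half-opacity group, copy the colour,
// then drop the original envelope path.
person.setAttribute("fill", envelope.getAttribute("fill")!);
group.appendChild(person);
envelope.remove();

console.log(new XMLSerializer().serializeToString(doc));
```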
The new vector file then just had to be integrated into the application, with the result as shown:
Check (and sign up) here: http://www.cypsel.com/users/sign_up?customer=1
In the end the whole exercise felt like an achievement: a first endeavour in the zone “of what scares you” while at the same time “singing for my own band” – kind of. And this is also one of the nice things about working for a startup: at some point everyone has to get out of their comfort zone and expand their capabilities.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100674.56/warc/CC-MAIN-20231207121942-20231207151942-00505.warc.gz
|
CC-MAIN-2023-50
| 4,525
| 23
|
https://docs.moogsoft.com/display/070100/Moogsoft+Observe
|
code
|
Moogsoft is currently offering Moogsoft Observe in a limited public Beta. Contact your Moogsoft account representative for more information. See Moogsoft Observe Releases for Observe documentation.
You can integrate Moogsoft AIOps with Moogsoft Observe via two products. Choose your integration process below according to your Moogsoft AIOps and Moogsoft Observe environments:
- Observe: Moogsoft recommends using this integration if you are using SaaS Moogsoft AIOps. It uses a push mechanism which sends Observe events to Moogsoft AIOps.
- Observe Polling: The polling method is useful when Moogsoft Observe cannot push events to Moogsoft AIOps due to firewall or security issues. You may need to use this method if your Moogsoft AIOps is installed on-premises.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578733077.68/warc/CC-MAIN-20190425193912-20190425215912-00378.warc.gz
|
CC-MAIN-2019-18
| 763
| 4
|
https://www.skillshare.com/projects/Noella-Dilber/320253
|
code
|
I played around with a few typefaces and went with a clearer, more readable font. I do not strive for minimalism; however, I do like a cleaner look when I present my work, so I applied that to this typography. I ended up choosing Forma for the main titles and Gibson for the flat text.
I tried to play with the layout on each page while having consistent repeating elements. I also added a dot to the main titles and removed it from the flat text at the end.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301737.47/warc/CC-MAIN-20220120100127-20220120130127-00708.warc.gz
|
CC-MAIN-2022-05
| 461
| 2
|
http://eniatv.xyz/archives/1335
|
code
|
Novel – The Legend of Futian
Chapter 2415 – Piety stage prepare
Outside of the ancient mansion, the various cultivators all continued to be there. No one eventually left.
They obviously would not easily totally agree to do this.
Performed Blind Chen imply that the spoils on the Vivid Temple would appear all over again currently?
Blind Chen’s number landed from the damages. Chen Yi, Ye Futian, and also the other people also landed. Powering them, the figures on the cultivators of the a variety of factors stayed drifting in midair. These folks were holding out quietly behind them, waiting for Blind Chen to behave. They were ready to view how he was going to wide open the relic in the Vivid Temple.
Whilst the Wonderful Vivid Domain name was obviously a weakened site, there were still numerous forces found listed here. The leading four big energies were actually all primarily based in this region, making a cluster of robust cultivators. The most robust existences were definitely all cultivators who got survived the 1st period with the divine tribulation in the Terrific Direction.
Obviously, some unexplainable cultivators could occasionally be seen during the Fantastic Vibrant Site. They had been unusual cultivators who originated in this article to pry in to the relic with the Vivid Temple. Even so, every one of them had identified absolutely nothing, and so they would soon depart the location. Just the cultivators of the four significant makes stayed right here completely.
The eye area in the cultivators narrowed after they noticed his terms. An excellent mild flashed with their eye.
Their director was an elder who came out extremely authoritative and very sharp. There were two other elders beside him who also acquired horrifying auras. People were all aged monsters of your Lin clan and seniors of Lin Kong—the clan top of your head of your Lin clan.
Ye Futian obtained heard that Sightless Chen experienced existed for many years. However, he could not often be a cultivator from the past who made it through until the present day, perfect?
He bowed slightly to the Portal of Mild and then prostrated himself on the floor, worshipping the portal. It was actually just like this is his faith, and that he was showing his unequalled piety.
The relic of light-weight obtained not been opened up for many years. Was it likely to start because merely a youngsters had showed up here?
Ye Futian himself failed to recognize. Sightless Chen claimed that Ye Futian could unravel the mystery in the Vibrant Temple. On the other hand, there had been simply a Portal of Lightweight here. What does Ye Futian have to work with?
In Fantastic Vivid Location, Sightless Chen was still very well known.
Now, why experienced Sightless Chen helped bring the cultivators of Good Shiny Metropolis in this article?
Can it be there existed a link between Blind Chen as well as temple?
This Portal of Gentle also appeared very dangerous.
Many individuals couldn’t assist but consider another examine Ye Futian. Sightless Chen has been waiting for Ye Futian’s introduction, and then he welcome him with light nowadays. Now that Ye Futian was in this article, people were exploring the Portal of Lighting quickly. What have this suggest?
No-one presented signs and symptoms of attacking nowadays. When they saw Sightless Chen striding forwards, they put into practice him and migrated towards the Portal of Mild. The gazes on the cultivators with the Lin clan have been as cold as ice if they stared at Sightless Chen’s rear. Nonetheless, considering the fact that Patriarch Lin failed to a single thing, they suppressed their killing objective and followed regarding him closely.
Their chief was an elder who shown up extremely authoritative and distinct. There have been two other seniors beside him who also experienced alarming auras. These were all aged monsters with the Lin clan and older persons of Lin Kong—the clan brain with the Lin clan.
Sightless Chen still retained onto his crutch. He looked at Patriarch Lin, who withstood in midair, and said, “I have informed your junior well before. Because Lin clan hasn’t had the opportunity to willpower your junior, she is going to in a natural way have to pay the price for which she is doing.”
However, the Bright Temple had been a top pressure in ancient times. Why would Sightless Chen have got a reference to it?
When Ye Futian saw this world, he revealed a strange seem. Who had been Sightless Chen just? Why was he so pious to the Dazzling Temple?
All over the mansion, lots of cultivators believed an extreme force suffocating them.
In Fantastic Dazzling City, Blind Chen was still very famous.
In the event it have been so, it becomes inconceivable.
Using a noisy bang, the doorways of the classic mansion were instantly shattered. The sunshine shroud that stop the wills of the cultivators in a natural way also faded without using a find. The competition looked interior beyond the exterior doors. Then, a team of individuals surfaced.
Now, why had Sightless Chen brought the cultivators of Good Vibrant Metropolis in this article?
He bowed slightly for the Portal of Gentle after which prostrated himself on the ground, worshipping the portal. It turned out almost like that was his faith, and this man was presenting his unrivaled piety.
Performed Blind Chen means that the remains from the Vivid Temple would appear once more currently?
Patriarch Lin glanced around him. Then he checked for the aged mansion. A frightening atmosphere emanated from his physique and enveloped the space. Most of the cultivators current could feel a majestic tension along with an extremely distinct will acting on them.
All things considered, in the past, those who accessed the Portal of Lightweight ended up with tragic fates.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00578.warc.gz
|
CC-MAIN-2022-49
| 6,130
| 35
|
http://forum.trillek.org/viewtopic.php?f=12&t=725&p=6874&sid=41b71382772356add2680c3b9844efb7
|
code
|
Well, that was quite the article. Definitely puts some aspects of space combat into perspective.
That little tidbit about bigger ships not being as slow as they look made me chuckle. Acceleration is what contributes to that stigma, I think.
Overall, I like the idea of utilizing gyroscopes as much as we can. Most personal ships I imagine will be quite small, so generating enough power to maneuver them quickly and accurately with gyroscopes should be a trifle.
"The best solution to a problem is usually the easiest one." - GLaDOS
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122041.70/warc/CC-MAIN-20170423031202-00023-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 532
| 4
|
https://www.cio-asia.com/resource/applications/microsoft-powerbi-puts-web-and-internal-data-on-the-map/?page=2
|
code
|
Parsing the data in other ways provides other insights as well. In Staten Island, garbage and street infrastructure are ongoing issues, while in Brooklyn heating and construction noise are the main complaints.
Some users have already found value in Power BI, according to Microsoft.
Global cosmetics manufacturer Revlon, for instance, has found Power BI to be a credible alternative to Oracle Hyperion, the BI tool the company has traditionally used. Revlon had each country's office submit a report to the head office, the material from which was then reorganized into sections for each brand manager. Traditionally, this compilation would take two days. Power BI offers the ability for the brand managers to compile the data themselves on the fly.
"It has empowered the end users to be a lot closer to the results of the business," Kelly said.
The formal release of Power BI also brings with it a number of new capabilities. It can connect to a number of new data formats, including blob storage and table storage in the Microsoft Windows Azure cloud service, as well as data in Microsoft Active Directory and Microsoft Exchange.
Microsoft has also expanded its catalog of public data for Power BI users. It now includes Wikipedia data as well as financial data from Dun & Bradstreet.
Power BI is part of the Office 365 ProPlus service, which costs US$52 per user per month. Organizations that have licensed copies of Office can purchase Power BI and SharePoint online for an additional $40 per user per month. Users of the Office 365 Enterprise E3/E4 packages using a promotion that goes until June would pay $20 per user per month for the additional Power BI capabilities. It will be $33 per user per month after June.
The amount of data that can be stored depends on the Office 365 allotment, which is now 25GB per user. Workbooks are limited to 250GB in size each.
One in four Microsoft Office users are now using Office 365 in some capacity, according to the company.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944677.39/warc/CC-MAIN-20180420174802-20180420194802-00383.warc.gz
|
CC-MAIN-2018-17
| 2,012
| 10
|
https://dba.stackexchange.com/questions/264102/foreign-key-references-on-column-definitions-are-ignored-feature-not-a-bug-w/264123
|
code
|
Good old references constraints. They work like a charm when defined at the table level.
create table foo (id int primary key);
create table bar (id int, foreign key(id) references foo(id));
insert into bar values (1);
-- ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (...)
But if you come from another ecosystem and are used to occasionally defining foreign key constraints at the column level, this is what happens:
create table baz (id int references foo(id));
insert into baz values (1); -- happily takes a value that isn't there in foo
select id from baz; -- 1
What happens is that the references clause has been recognized, but ignored.
It turns out that this is not a bug. The MySQL documentation says they do it, and that's all you need to know:
MySQL parses but ignores “inline REFERENCES specifications” (as defined in the SQL standard) where the references are defined as part of the column specification. MySQL accepts REFERENCES clauses only when specified as part of a separate FOREIGN KEY specification.
The MariaDB documentation is slightly more verbose on their rationale:
MariaDB accepts the REFERENCES clause in ALTER TABLE and CREATE TABLE statements, but that syntax does nothing. MariaDB simply parses it without returning any error or warning, for compatibility with other DBMS's. However, only the syntax described below creates foreign keys.
Now what could be the use of this "feature" that helps "compatibility" with other DBMSs and the standard by silently breaking the very purpose of the reference? Implementing it correctly does not look like a big effort, since foreign key constraints are indeed enforced when declared at the table level. And don't tell me this cannot be fixed because people rely on the fact that foreign key constraints can be broken when declared at the column level.
Please help me make sense out of this.
EDIT: I just realized that by "compatibility with other DBMS", the MariaDB documentation may actually be referring to MySQL. This could either be a good motive for MariaDB to stick to the (unmotivated) behavior of MySQL, or a missed opportunity to improve their fork.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363215.8/warc/CC-MAIN-20211205160950-20211205190950-00538.warc.gz
|
CC-MAIN-2021-49
| 2,183
| 13
|
https://muut.com/i/flatcam/usage:windows-installation-of-fla
|
code
|
Fri, 01 Sep 2017 06:45:37 GMT
Greetings every one,
I've tried to install the program several times but it doesn't work.
Python is 3.6, 32-bit. In the WinPython Control Panel I installed Rtree (32-bit) and Shapely (32-bit). I've tried to open the file FlatCAM.py with Python (the application). Nothing happened, and FlatCAM doesn't show the Gerber file.
I've tried to install Python 2.7 32-bit, but the Control Panel doesn't start up.
The installation guide for Windows here is probably written for those who already know Python...
Can someone explain how to install the program for windows?
Thanks in Advance,
Sat, 02 Sep 2017 16:24:42 GMT
On Windows 7 the program works; on Windows 10 it does not. That's the root cause of the problem.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867055.20/warc/CC-MAIN-20180525082822-20180525102822-00401.warc.gz
|
CC-MAIN-2018-22
| 699
| 10
|
https://gitlab.com/ascz/acr_a3/wikis/home
|
code
|
This mod contains various high-quality infantry units, vehicles and equipment. Optionally, you can get more vehicles and weapons with the CUP or RHS mods.
@ACR_A3_CUP + @CUP
@ACR_A3_RHS + @RHS
The mod can be found on these official release mirrors:
Alternatively, you can build the project with Mikero's pboProject (include the whole acr_a3 folder) - https://mikero.bytex.digital/Downloads - or BI Addon Builder.
If you like our work, please consider donating ;)
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315681.63/warc/CC-MAIN-20190820221802-20190821003802-00436.warc.gz
|
CC-MAIN-2019-35
| 449
| 6
|
http://ir.lib.sdu.edu.cn/widgets/sdjgk/?h=gain&job=detail&a_id=YAd4vWUBFjIhTVEb8Tpb
|
code
|
Title: A Privacy Preserving Algorithm Based on R-constrained Dummy Trajectory in Mobile Social Network
Authors: Ni, Lina; Yuan, Yanfeng; Wang, Xiao; Yu, Jiguo; Zhang, Jinquan
Affiliations: [Ni, Lina; Yuan, Yanfeng; Wang, Xiao; Zhang, Jinquan] College of Computer Science and Engineering, Shandong University of Science and Technology, Qin…
Conference: International Conference on Identification, Information and Knowledge in the Internet of Things, 2017
Conference dates: October 19, 2017 - October 21, 2017
Source: Procedia Computer Science
Abstract: Recently, research on location privacy preservation has become a hot spot. Location privacy preservation involves not only single-location privacy but also trajectory privacy, where mobile users in different locations publish consecutive query requests. In this paper, we consider the problem of trajectory privacy preservation in MSN. In particular, we propose privacy preserving algorithms based on an R-constrained dummy trajectory (RcDT). By constraining the generating range R of the dummy positions, the generated dummy positions are kept within a certain range near the real locations. Furthermore, dummy trajectories with higher similarity to the real trajectories are generated by constraining both the single-location exposure risk and the trajectory exposure risk.
© 2018 Elsevier Ltd. All rights reserved.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899209.48/warc/CC-MAIN-20200709065456-20200709095456-00159.warc.gz
|
CC-MAIN-2020-29
| 1,389
| 8
|
http://darkspell.wikia.com/wiki/Darkspell_Wiki
|
code
|
Welcome to the Darkspell Wiki
Happy Tuesday! Welcome to the Darkspell Wiki, the wiki for data about Darkspell on Kongregate. Feel free to help out by editing the pages with data you know about!
Darkspell is a forum game on Kongregate, a free flash gaming site. The Forum Games Forum, or FGF, is where the Darkspell series takes place. Darkspell is played by people on the FGF. The creator of this Wiki and Darkspell, Rosate, is also a player in Darkspell, and created this to archive the facts about Darkspell.
See the navigation tabs on the top of the page to traverse our pages.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155413.17/warc/CC-MAIN-20180918130631-20180918150631-00452.warc.gz
|
CC-MAIN-2018-39
| 580
| 4
|
http://holyindia.org/map/kamareddy_to_sirnapalli/distance
|
code
|
Kamareddy to Sirnapalli distance
Kamareddy is a city in India, located at longitude 78.3 and latitude 18.3. Sirnapalli is another location in India, at longitude 78.3 and latitude 18.5. The total distance from Kamareddy to Sirnapalli is 18 km and 18.19 meters. Kamareddy lies roughly to the south of Sirnapalli.
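For reference, distances like this are typically computed from the coordinates with the haversine (great-circle) formula. A small sketch follows; note that with the one-decimal coordinates quoted above it gives roughly 22 km, so the 18 km figure presumably comes from more precise coordinates:

```ts
// Haversine great-circle distance between two lat/lon points, in kilometers.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371; // mean Earth radius in km
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Rounded coordinates from the page: Kamareddy (18.3, 78.3), Sirnapalli (18.5, 78.3).
console.log(haversineKm(18.3, 78.3, 18.5, 78.3).toFixed(2), "km"); // ≈ 22.24 km
```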
Related travel information at Kamareddy to Sirnapalli road map
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891926.62/warc/CC-MAIN-20180123111826-20180123131826-00468.warc.gz
|
CC-MAIN-2018-05
| 493
| 3
|