Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
This is the course home page of CSC101: Computer Science Without Programming, Fall 2006.
The URL is: http://www.cs.rochester.edu/u/ogihara/CSC101/home.html .
What is Computer Science all about? Should I be interested in Computer Science as a major? Can I explore Computer Science without having to program? This course offers an overview of computer science as a discipline. The course is built upon three major focuses: (1) fundamental concepts in computer science, including hardware, software, information representation, computer operation, algorithms, compilation, debugging, HTML, WWW, and searching on the Internet; (2) traditional topics in the field, including information security, artificial intelligence, human computer interaction, computer systems, and theory of computation; and (3) modern applications of computer science, e.g., bioinformatics, Internet search engines, virtual reality, and electronic commerce. No programming is required. The course has no prerequisite courses, but the students are expected to be able to browse the Internet and edit documents using a word-processor.
- There will be ten quizzes covering the sixteen chapters of the textbook. Quizzes are closed-book, but each student is allowed to bring in one "cheat sheet." This sheet should not be shared with other students. Each quiz will be graded in the scale of 0 to 5 points. The lowest grade of the ten quiz scores will be dropped. Thus, the total points from the quizzes will be 45.
- There will be two in-class, closed-book exams. The "one cheat sheet" in the above rule applies to both of the exams. Each exam will be graded in the scale of 0 to 50 points. Thus, the total points from the exams will be 100.
- There will be ten papers to be written. Five of them will be guest lecture notes. Either two or three of them will be on movies shown in class. Each paper is expected to be three to four pages long. Each paper will be graded in the scale of 0 to 10 points. As in quizzes, the lowest of the ten scores will be dropped. Thus, the total points from the papers will be iether 90.
- There will be one group research project (the maximum size of a group is 3). The topic of the project is yet to be decided. The project will evaluated based on a written group report and an in-class group presentation. Both report and presentation will be evaluated in the scale of 0 to 15. Thus, the total points from the projects will be 30.
- There will be no final exam.
Categories and Their Weights
Category Number Individual Weight Total Weight Quizzes 10 5 45 Exams 2 50 100 Papers 10 10 90 Project 1 15+15 30
The course letter grades will be determined based on the percentage of the total points earned to the total maximum of 260 or 270. The plan as of now is as follows:
Final Letter Grades
Percentage Letter Grade 85% or higher A 70% or higher B, B+, A- 55% or higher C, C+, B- 40% or higher D, D+, C- Less than 40% F
The course textbook is George Beekman and Michael J. Quinn, "Computer Confluence," Seventh Edition, Prentice Hall, 2005.
|
OPCFW_CODE
|
Changing the Owner of a folder back to SYSTEM
ISSUE1:
This is what I get when I try to open my C:/Documents and Settings.Although I am the owner of the computer I was denied permissions to open the folder. So, I changed the owner of Documents and Settings from SYSTEM to Fasih(HP-PC\HP) and this is me. Still I cant access.
Also, I want to revert the owner back to SYSTEM, just in case to prevent anything stupid. So I retraced the steps and now.. I cant change the owner back to SYSTEM (strange).
Help!
NOTE: I realized from another question that this folder doesn't exist any more. So please tell me how to revert the Owner back to SYSTEM
Came to know after I lent him mine, a lil late
no no, I changed the permissions on C:/ Drive to try to access Documents and Settings
I will rephrase the entire questions!
Changing permissions on the system drive C: is a bad idea, back up your data and reinstall the OS, too many different permissions on sub folders on C, may be impossible to restore them.
Did I understand you correctly, that your drive letter C is on the external HDD and not your system volume? While this is possible it is quite uncommon. If the external drive behaves weird, check it's S.M.A.R.T. status with a tool like GSmartcontrol. Do not attempt to write to it (chkdsk, changing ACLs) until you know it is healthy. Depending on the worth of your files, do a sector based backup immediately. You might get better results if you access the drive directly and not through a USB bridge which you probably use at the moment.
Rephrasing the question has been done!!!
and I changed the permissions on C:/Documents and Settings . Disaster Averted! my bad in typing, nervousness has gottn the better of me
Putting two completely different problems in one question is a really bad idea. I suggest you to remove your first question to prevent your question from being closed as it is likely answered in http://superuser.com/questions/49582/access-denied-to-documents-and-settings-on-vista anyway.
While the owner should not matter, you can change it back to the default by running icacls "C:\Documents and Settings\" /setowner SYSTEM
Looks like official documentation for Windows 7 is missing but you can find it for Windows Server here: http://technet.microsoft.com/en-us/library/cc753525(v=ws.10).aspx - Behavior should not differ.
Well, if the owner doesnt matter then I wont worry about it. How does the owner matter anyways?
Being the owner of a file system object always allows you to change permissions on that object, even if you do not have the explicit right to change permissions for that object. This is why you had to become the owner of the junction point to change the permissions. Learn more about permissions in Windows NT based systems on http://technet.microsoft.com/en-us/library/cc783530(WS.10).aspx.
|
STACK_EXCHANGE
|
CD Audio disk won't play in DVD/CD player
Read the Guides forum if you don't know how to do something.
If you have a question or a problem, check the FAQ and use the Search to see if you can find the answer for yourself.
If you're having trouble burning double layer media, read Here.
Still stuck? Create a new thread and describe your issue in detail.
Make sure you include a copy of the program's log in your post. No log =
Posted 23 May 2008 - 09:06 PM
OS: XP Home SP2
Computer: HP Pavilion dv1000 (laptop), 1.5Gb RAM
DVD burner: Sony DVD RW DRV-840A USB (external drive, fresh out of the box)
I created a CD audio disk image using the laptop's internal drive, and burned a few CD's from this image on the internal drive successfully, and they play on all the CD players I've tried them on. I was also successful in burning the same image using an older external drive, but unfortunately it seems to have a mechanical problem and starts scoring the disks after it does a couple of burns. So I went out and bought 2 new, highly rated Sony drives as above.
The burns are successful, verification is successful, and they play fine on the computer and on at least one CD player, but they don't play on one particular CD player, ironically a Sony home theatre. The disk starts to spin up and the player is able to see the number of tracks, etc., but can't seem to find the start point. The disks I burned on the internal drive work fine on that player. Disks burned on either Sony drives are unreadable on the Sony home theatre system (also a DVD player, Super Audio CD player etc.).
So: same computer, same image, same OS, same ImgBurn software, same media, different burner.
What is your best guess as to what the problem is here? Is there anything I can tweak to achieve a better result?
In the interest of full disclosure, yes, I was burning to both Sony drives simultaneously, which turned out to be a faster way of creating a bunch of coasters. But at least one was burned by itself and it too had this problem.
I did not install any of the crappy Nero software that came with the drives, and afaict there was no Sony software that could be installed independently. XP seemed to detect the new hardware and recognize them. And of course the disks are good, on most players.
Posted 23 May 2008 - 09:35 PM
I checked the Sony site, and indeed, no drivers are required, but they have a lot of scary language about having this be the only USB device connected, and to connect it directly and not through a dock (which I happen to have done). Okay, I can try that, but I'm skeptical that that would be a cause since it seems to be an issue of degree (sometimes unreadable), not of kind (like not being able to detect the drive or flat-out write errors). I'll also try a burn from a different computer.
Posted 24 May 2008 - 01:02 AM
But if you know of any tweaks as concerns ImgBurn (maybe it is not waiting long enough for the laser to fire up before starting burn of track one, resulting in marginal laser pitting there? if I skip to track two everything is ok), I will like to hear of them.
Posted 24 May 2008 - 01:08 AM
Posted 26 May 2008 - 04:11 PM
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users
|
OPCFW_CODE
|
__description__ = \
"""
Python implementation of the Gillespie algorithm for simulation of small
chemical system with discrete chemical components.
Gillespie DT (1977). J Phys Chem 81(25):2340-2361
"""
__author__ = "Michael J. Harms"
__date__ = "070508"
from .base import ReactionSimulator
import numpy as np
class Gillespie(ReactionSimulator):
"""
"""
def __init__(self,rxn_input):
self._rxn_input = rxn_input
self.setup_reaction(self._rxn_input)
self._terminated = False
def take_step(self,num_steps=1):
for i in range(num_steps):
self._take_step()
def _take_step(self):
"""
Perform Monte Carlo move.
"""
if self._terminated:
return
rxn_prob = np.zeros(self._num_reactions,dtype=np.float)
for i in range(self._num_reactions):
rxn_prob[i] = np.prod(self._current_conc[self._reactant_slices[i]])*self._rates[i]
total_rxn_prob = np.sum(rxn_prob)
if total_rxn_prob == 0:
self._terminated = True
print("no reactants left")
return
rxn_prob = rxn_prob/total_rxn_prob
# time step
tau = np.log(1/np.random.random())/total_rxn_prob
# Figure out reaction that occurs on this time step
rxn = np.random.choice(range(self._num_reactions),p=rxn_prob)
# update concentrations
self._current_conc[self._reactant_slices[rxn]] -= 1
self._current_conc[self._product_slices[rxn]] += 1
self._time_steps.append(self._time_steps[-1]+tau)
self._conc_history.append(np.copy(self._current_conc))
@property
def terminated(self):
return self._terminated
|
STACK_EDU
|
React engine generates checksum warning when directly launch the server page "http://localhost:8000/trends"
ufocoder,
First of all, deeply appreciate this boilerplace. Very nicely done!
But I am seeing a following warning when launching a page (http://localhost:8000/trends) that requires async data fetching on the server side:
client.js:9281 Warning: React attempted to reuse markup in a container but the checksum was invalid. This generally means that you are using server rendering and the markup generated on the server was not what the client was expecting. React injected new markup to compensate which works but you have lost many of the benefits of server rendering. Instead, figure out why the markup being generated is different on the client or server:
(client) <!-- react-empty: 1 -
(server) <div data-reactroot="
I also observed that the rendering actually is done 2 times: one is the direct server HTML output being rendered. Then the React engine will render it again because the "checksum" mismatch. Then the React engine also thrown the above warning to the console. The view actually flickers upon the 2nd rendering.
One of the major benefit for the "universal/isomorphic" rendering is efficiency - to allow user see the page quicker. With this 2-time rendering, this advantage is lost. Any chance you could find a solution for it?
Thanks again!
William
@williamku Thank you for feedback, I try to fix this issue on this week.
@williamku It was fixed by https://github.com/ufocoder/redux-universal-boilerplate/commit/73ff62100b05822bc45e3776e66c7b8689cfa3af, but It's not a pretty good solution.
I believe my question will answered there https://github.com/makeomatic/redux-connect/issues/62 and I will improve boilerplate
ufocoder,
not sure if you are able to receive my this reply since the mail was
originated from github.
First of all, Thanks for the quick turn-around!
I got your latest fix and it worked to certain extent - at least the React
warning is no longer present. But the server side generated HTML content
seems there is a flickering problem. To find out why, I turned off the
javscript on Chrome and I got the server side HTML which look like this:
[image: Inline image 1]
Clearly, it looked much different from the final result. Then, after
this initial rendering, somehow, the client side style has to be applied
--- i think this is the reason for flickering. I would ideal that this
flickering can be resolved...
But regardless, I want to thank you very much for the quick turn-around.
You have a great day!
William
On Mon, Aug 15, 2016 at 2:50 AM, Ufocoder<EMAIL_ADDRESS>wrote:
Closed #4
https://github.com/ufocoder/redux-universal-boilerplate/issues/4.
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/ufocoder/redux-universal-boilerplate/issues/4#event-755335725,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AEOUKnhS6QYjvUt5za6nhNT4h28-k4i4ks5qgAxEgaJpZM4Jj_zY
.
@williamku I can't view your attached image, please check it.
Boilerplate can works in two ways:
build bundlers and run server - use this way for deployment
watch mode - use this way for development
In watch mode all modules load one by one as well as style modules that why I see html layout and then this layout is stylized. But if you run build command and then run server I will get proper way of work for styles.
Check README.md also, please.
ufocoder,
Sorry about the missing image --- i copy-and-pasted in inline - that may be
the reason that it was missing...
Let me include that image here as attachment. Hope it will be picked up
on your side...
I don't think this leftover issue is a big deal. So please resolve it only
if it is convenient for you to do so...
Thanks again...
William
On Mon, Aug 15, 2016 at 9:51 AM, Ufocoder<EMAIL_ADDRESS>wrote:
@williamku https://github.com/williamku I can't view your attached
image, please check it.
Boilerplate can works in two ways:
build bundlers and run server - use this way for deployment
watch mode - use this way for development
In watch mode all modules load one by one as well as style modules that
why I see html layout and then this layout is stylized. But if you run
build command and then run server I will get proper way of work for
styles.
Check README.md
https://github.com/ufocoder/redux-universal-boilerplate/blob/master/README.md
also, please.
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/ufocoder/redux-universal-boilerplate/issues/4#issuecomment-239806794,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AEOUKkZkrPKXJqNDesKUMZPf_ta5BPXlks5qgG7xgaJpZM4Jj_zY
.
@williamku To attach image, write a URL to image, then open this thread directly at github.com to check the result. Don't use your mail agent it not attach image properly.
|
GITHUB_ARCHIVE
|
By Brennan Xavier McManus
This post was developed as part of the Columbia University course “Multilingual Technologies and Language Diversity” taught by Smaranda Muresan, PhD and Isabelle Zaugg, PhD. This cross-disciplinary course offering was a joint effort between the Institute for Comparative Literature and Society and the Department of Computer Science, developed through the generous support of the Collaboratory@Columbia.
The more time I spend studying language and language learning, the more impressed I become with those who rely on acquired second languages for their personal, academic, and professional lives. Having felt the struggles of operating in an acquired language firsthand, the challenges faced by those operating in one on a daily basis become all the more apparent, and I am certainly not the only one to marvel at these. In “The Cosmopolitan Tongue: The Universality of English,” John McWhorter expresses a similar sentiment. However, the aspect of his piece that affected me the most strongly was McWhorter’s defense of English as the world’s up-and-coming lingua franca. I do not take my general disagreement with his perspective as an indictment of the validity of his points. Instead, I want to explore the assumptions and implications of English’s current and future status as an internationally used language. It is certainly clear that the seemingly universal presence of English across the world today has its roots in colonial history as opposed to some specific feature of the language itself. As I see it, the fact that English continues to be so widely used is a testament only to the fact that it is functional enough as a common second language, not that it is necessarily the language most intrinsically suited to such a role. And while I don’t propose that humanity can or should adopt one language universally for all aspects of life, I do see the value of a universal second language as a way to overcome practical issues of communication.
Due to the extent to which the internet and related technology have mingled with our daily lives, any lingua franca hopeful would need to be easy to use in an online space. I believe that English’s prominence online is due to many factors, one of which is an aspect of first-mover advantage such that that the entire system has been built primarily using English, for use with English, in the English-speaking world. I believe that individual users should be given more of a choice as to which language they deem appropriate to use online, and that the most widely used language online should not just be decided by default. To this end, it is just as important that the development of technology offer linguistic flexibility as it is for the resulting product to do so. To examine the importance of linguistically flexible software development, it is first important to understand the current status quo and the role of English as the software lingua franca. It is also important to emphasize the personal value of individual languages and the roadblocks that a monolithically English online world presents for non-native speakers. From there, I want to explore paths forward, ranging from the adaptation of English-based programming languages to the implementation of languages implemented using exclusively non-English keywords.
It is no coincidence that the internet today is dominated by text in the English language. The invention of the internet came about when English was already rising as a globally dominant language, and every layer of its infrastructure is built upon technology that uses English as its foundation. This technological head start is analogous to the head start English has enjoyed in areas including “popular culture, scholarship, and international discourse” (McWhorter 66). Anecdotally, in my own experience on the Internet, encountering content that features grammatically perfect prose posted by non-native English speakers is quite commonplace.
The software underlying the internet as it exists today was and continues to be developed with the use of English-centric technologies, programming languages, and textual encoding. Even the adoption of Unicode as a viable international text encoding standard was only possible as a result of the fact that Unicode does not break compatibility with the ASCII set of encodings in its original state. According to Daniel Prado in his piece “Language Presence in the Real World and Cyberspace,” despite many advancements in generalizing and localizing the internet, “English remains the language of programming, markup, coding, communication between servers and most importantly, the bases of computer languages. Computer languages are based on English, and computer scientists are professionally required to know it” (43). This basis on English means that a given piece of code, regardless of the names you give items in your program, will contain English words like for, do, while, if, else, and goto. The fact that computer science as a discipline effectively requires English proficiency leads to a situation in which the population upgrading, maintaining, and expanding the internet is comfortable in English and much less likely to push against the online English monolith.
The notable prominence of English online necessarily brings with it issues of accessibility. Even McWhorter, who offers support of English as the world’s language, notes that “most immigrants who actually try to improve their English skills here in the United States find that they have trouble communicating effectively even with doctors or their children’s schoolteachers” (62). Even if the transition towards some strongly dominant single language is unavoidable, the time such a process takes means that large populations will be left behind as immigrants to the English internet, and some who will never achieve English fluency in their lifetime could find themselves left out entirely. Moreover, the personal languages of each individual carry with them a value that I believe goes beyond the aesthetic, and the internet as a medium of communication would miss out on the unique ways in which people can communicate in the true fluency of their mother tongue. What’s more, the effect of the English monolith on native languages could be one that accelerates the rate of declining usage. In his paper “Digital Language Death,” András Kornai notes, “A language may not be completely dead until the death of its last speaker, but there are three clear signs of imminent death observable well in advance. First, there is loss of function, seen whenever other languages take over entire functional areas such as commerce. Next, there is loss of prestige, especially clearly reflected in the attitudes of the younger generation. Finally, there is loss of competence” (Kornai 1). Due to the increasing role of the internet in our lives, the “functions” of daily life can certainly be taken to include capacity to engage with the online world. As a result of the accessibility issues surrounding non-English languages online, their growth may not just be impeded, but their continued existence threatened. The further the technologies of the internet go down this path, the more at-risk other languages become.
I believe that a potential approach to preventing or at least slowing such a process could be the expansion of software development tools to a wider range of regional languages. In the past, I have noticed a tendency in code written in non-English speaking countries, where the program itself is implemented in English-based programming languages like Python, C, or Java, but the inline comments are in the local language and usually the local script. I have experimented personally with adapting C++ code to use Mandarin keywords using textual preprocessing, but I believe more elegant adaptations from existing English-based languages are possible. More in the spirit of breaking the trend of exclusively English-oriented development would be the implementation of programming languages with no connection to English. I was very happy to learn of the existence of Qalb (قلب, meaning heart), a programming language that uses Arabic exclusively. In Steven R. Loomis, Anshuman Pandey, and Isabelle Zaugg’s “Full Stack Language Enablement,” they describe the idea of programming in one’s mother tongue as “the final frontier for the internationalization/localization of digital technologies.” Languages like Qalb represent an important step in pursuing that goal. While it would be a large paradigm shift from programming today, I think the nature of this problem, and the fact that a programming language is something that can be implemented by a small group or even a single person, serves to emphasize the large impact an individual can have in the world of emerging multilingual technologies.
Kornai, András. “Digital Language Death.” PLoS ONE , vol. 8, no. 10, 22 Oct. 2013, pp. 1-11, doi:10.1371/journal.pone.0077056.
Loomis, Steven R, et al. “Full Stack Language Enablement.” Steven R. Loomis, 6 June 2017, srl295.github.io/2017/06/06/full-stack-enablement/.
McWhorter, John. “THE COSMOPOLITAN TONGUE: The Universality of English.” World Affairs, vol. 172, no. 2, 2009, pp. 61-68. JSTOR, http://www.jstor.org/stable/20671445 .
Prado, Daniel. “Language Presence in the Real World and Cyberspace,” translated by Laura Kraftowitz. Net.lang: Towards the Multilingual Cyberspace, edited by Laurent Vannini and Hervé Le Crosnier, C & F éditions, 2012, pp. 34-51.
|
OPCFW_CODE
|
- This event has passed.
DSI Workshop: Working with and Creating R Packages
January 19, 2018 @ 9:30 am - 12:00 pm
Working with and Creating R Packages
This Data Science Initiative (DSI) workshop is all about R packages – both using them effectively, creating them for your own use, and how to create and publish packages for others. Packages can be used to simplify your own workflows by organizing and installing data and/or your own code across different projects and across different machines. They also share your ideas and functionality with others and can count as “publications”. Packages are very simple to create, once you know how. And this workshop will address all levels of working with packages. This workshop is led by the DSI Director and Professor of Statistics, Duncan Temple Lang. For questions or to send suggestions for future workshop topics, email email@example.com.
Only basic knowledge of R required.
No registration is required for this event.
- What is a package?
- An installable unit (a collection of functions and/or data)
- should have documentation
- should be easy to install and share
- Incentive for organizing data into packages?
- It’s easier to install a package rather than sourcing all the files (data, functions, etc.)
- Working with packages:
- install.packages(“package.tar.gz”, repos=NULL)
- repos=NULL for if you’re working locally. Important if you don’t want R to go to CRAN to get it. But, it won’t get dependencies.
- devtools:install_github”dsidavis/RMinPackage”). Installs directly from source. Potential problems?
- package, especially packages under development, may not be stable (will it install?)
- source could be changing
- it may not be binary (i.e., you could need a compiler)
- Installing from shell can be faster:
- R CMD INSTALL package.tar.gz
- R CMD INSTALL packageDirectory
- Setting your repository:
- options() $repos
- not all mirrors are made equal
- Where is a package installed?
- library function loads a package. But, a library is really a directory containing multiple installed packages
- How do we tell R where to look for its packages? ~/Rpackages
- library(package, lib.loc=”~/Rpackages”)
- available.packages( ) goes to mirror and asks for all packages (returns metadata)
- list.files(.libPaths()) gives package names
- ~/.Rprofile can be a text file with code and it will get run every time you open R. In there you can put in commands to call .libPaths() and have it conditionally decide what to do based on the machine you are on, etc.
- Or, can specify the shell environment variable R_LIBS or R_LIBS_USER to be a colon separated list of directories
- Package – loading and attaching
- when install a package, it just downloads and puts it into your directory.
- to load a package, typically use library()
- library(package, pos = #) lets you set where it appears in the search path
- if you don’t want to attach it, you have to use :: operator (example: readr::read.csv)
- example: function called trim can’t be seen in the XML package, but if you use XML:::trim (triple colon) lets you be able to see it
- the call library() loads AND attaches the package. When use :: it just loads but NOT attaches (won’t have conflicts in the search path)
- when get rid of a package use detach()
- Writing a package
- you can turn your utility code into a package
- if writing it for yourself, you don’t have to include documentation, etc (but you should!)
- if you aren’t sharing with others, it’s a lot easier to make package
- writing a GOOD R package is software engineering
- needs to be robust: check class/type of inputs, lengths, NAs
- provide meaningful error messages
- provide good documentation (help pages are required; examples are also very useful, vignettes are even more useful)
- make functions flexible
- you can get academic credit for writing a package
- Anatomy of a package
- RStudio can click “make a package”; can also use kitten to make a package
- directory required to have:
- DESCRIPTION file
- Package: name of the package; doesn’t have to be the same as the name of the directory you create for the package
- Title: more info about the package
- Author: name
- Maintainer: who will be maintaining the package and email address
- Version: number formatted as: major-minor.patch
- Collate: useful to tell the order in which to load the functions (especially important if being installed on machines with different alphabets!)
- NAMESPACE file
- R directory
- Example with making new package RMinPackage
- R CMD INSTALL RMinPackage
- additional directories
- man – help files
- tests – tests to check the package
- data – data provided by the package
- inst – other files to be installed with the package
|
OPCFW_CODE
|
by Alan Peterson - Monday, March 7, 2022
With waterfowl seasons over and turkey season not yet here, game-chasers like me are itchy to get back into the outdoors, but don’t forget there are other hunts you can do in the meantime. With a little research and maybe a phone call or two, you can discover opportunities to satisfy your hunting passion and keep the freezer stocked 365 days a year by hunting invasive, or non-native, species.
Invasive species have infiltrated nearly every corner of the Northern Hemisphere. Whether they were intentionally released, or they escaped or they hitchhiked, most of them have worn out their welcome. State and federal agencies, businesses and private-land owners are now spending time and money to control these species. Very often, a hunter can help solve their problem and do it for free. Here is a list of species that offer year-round hunting opportunities. Of course, be sure you are thoroughly familiar with local laws and get landowner permission when considering a particular hunting location.
Eurasian Collared Doves
Originating around the Bay of Bengal, the Eurasian collared dove made its way through Asia, across the Middle East and into Europe. According to one story from the early 1970s, several birds escaped from a pet store in the Bahamas and ended up in Florida. Now they are found throughout North America. More likely to be found in suburban areas or farmland, they feed on waste grain and other seeds. Larger than a mourning dove, they are distinguished by their square, white-edged tail and black stripe across the back of their neck. Hunting them is similar in most respects to hunting mourning dove in that they respond to decoys and frequent water and feed locations on a regular schedule. Collared doves are great eating and are larger than other doves. It's tough to beat bacon-wrapped dove poppers on the grill. Greater success can often be found by locating a cattle or dairy operation where waste grain is easily found. Gaining permission is often easy because pests eat up expensive grain meant for cattle and other farm animals. By removing the freeloaders, you’re freeing up food for those animals. In most locations, there are no season or bag limits. Many states don’t even require a license to hunt these foreign invaders.
Ubiquitous and rarely protected, pigeons and starlings are often found in the same locations you find collared doves, making for multiple opportunities on the same hunt. These birds ravage grain stores like feedlots and silos. If a pigeon eats between 2-4 ounces of grain a day, 25 pigeons could consume nearly 6 pounds of grain every day. Over time, and with greater numbers, you can see why they are referred to as “pests,” and why farmers might be happy to have your assistance. I’ve even had instances where farmers offered to buy the shells for getting rid of these birds. Make no mistake: Pigeons can be a challenging quarry. Opt for No. 6 shot rather than No. 7½ or 8. Pigeons are fair eating and used to be called “squab” in restaurants. Like collared doves, most locations do not manage pigeons, which can be taken without concern for bag limits, seasons or licenses.
Iguanas and Pythons
Two other critters that first gained fame in Florida and have attracted a great deal of attention are green iguanas and Burmese pythons. They have since moved north and west where climates stay warm enough for these cold-blooded reptiles. Pythons eat anything they can get their mouths around and have devastated native bird, mammal, reptile and amphibian populations. They can be hunted 365 days a year on private lands without a license and there are no bag limits. They also can be taken on 25 state-run wildlife management areas. There are some restrictions on state lands relative to method of take but there are no seasons or bag limits. The Florida Wildlife Conservation Commission encourages everyone to remove these destructive snakes when possible. Since March 2017 more than 4,000 invasive Burmese pythons have been removed from the Everglades subtropical wilderness in southern Florida. You can even apply to be part of a “Python Action Team” and get paid for your removal efforts.
Green iguanas showed up in Florida in the ’60s. They have since made themselves right at home. They are destructive to fruit and vegetable crops, and they also destroy seawalls, dikes and canals through their digging and burrowing behavior. The same rules apply to iguanas as pythons. And, of course, like every other unusual table fare, “they taste just like chicken.”
An important note is that Florida law specifies that even unwanted species like Burmese pythons and green iguanas are protected by animal cruelty laws. As hunters, of course, we work to kill our quarry quickly and humanely.
A semi-aquatic rodent species originally from South America that is wreaking havoc in Gulf Coast states, the Chesapeake, lower Mississippi, and the Northwest is the nutria, a large orange-toothed, muskrat-looking critter that can weigh as much as 20 pounds. Under the right conditions, biologists say that a single pair of nutria can produce more than 16,000 offspring in three years. Introduced for the fur trade between 1899 and 1930, nutria escaped and were released into an environment with no natural predators and are now found in nearly 20 states. Nutria burrow and destroy water management structures, eat up to 25 percent of their body weight in valuable marshland vegetation, and can carry tuberculosis and septicemia. This is a creature people want to see removed, and hunters can help. In Louisiana alone, a quarter of a million nutria were harvested in the 2018-2019 season. Louisiana, for example, has a trapping and hunting program that can provide a bounty on each nutria taken. Texas requires a license and has other limitations, but the season is year-round. Other states like Oregon have no seasons or bag limits. And, of course, the critter tastes just like chicken.
Not much needs to be said about feral hogs. This destructive and delicious beast is here to stay. Their destructive habits lead to mind-boggling damage figures. According to the USDA, “ … this invasive species costs the United States an estimated $1.5 billion each year in damages and control costs. Feral swine also threaten the health of people, wildlife, pets and other domestic animals.” According to one NRA article, “research suggests the average pig causes $200 in damages annually, though the actual figure likely is even higher.”
No wonder a hog hunt is one of the least expensive hunts you can undertake, particularly in Texas, which houses more than half the nation’s hog population. Or you can go “whole hog,” and pursue them from helicopters with AR-platform shotguns. There is a way to adapt a hog hunt to suit just about any hunter’s age, ability or budget.
You don’t have to go to Africa to go on safari. Several states manage healthy populations of non-native, or exotic, big-game species that used to be available only after spending two days in airports and on planes to reach the “Dark Continent.” Non-native or exotic species are not subject to the typical hunting seasons and bag limits of native wild-game species and are available to hunt all year. Texas leads the charge in offering opportunities to hunt exotics, with outfitters offering everything from the regal, spotted axis deer (as shown in the opening photo) to blackbuck, kudu, sable, wildebeest and more. Speaking of axis deer, which was introduced into Texas in the 1930s, all you need is a hunting license to be able to take them year-round. Another popular Texas opportunity is the aoudad, also called the Barbary sheep, that was introduced in the 1950s and now competes with the state’s desert bighorn populations.
A Note on Coyotes
No article on year-round hunting opportunities should go without mentioning the option to pursue coyotes. Native to North America, this species is responsive—not invasive—and represents one of Mother Nature’s most familiar and adaptable creatures. Found just about everywhere, from deserted deserts to Rodeo Drive in Beverly Hills, coyotes have dug in and not only survived but thrived. As this NRA website notes, the species’ range and numbers make for year-round hunting opportunities. Many states even provide a bounty on coyotes. With their speed, extraordinary sense of smell and eyesight, and uncanny instincts, coyotes are a challenge to hunt. They also provide an opportunity to practice long-range shooting skills and can help prepare you for other big game hunts.
So, while we often yearn for spring and fall hunts to roll around, remember you can sidestep the doldrums. There are plenty of opportunities to get out there and help control unwanted pests and add to your stock of wild game. Because of invasives, there is no reason you can’t scratch that hunting itch all year long.
E-mail your comments/questions about this site to:
|
OPCFW_CODE
|
what gcc main.c means ?
The GNU Compiler Collection (GCC) is a compiler system produced by the GNU Project supporting various programming languages. GCC is a key component of the GNU toolchain and the standard compiler for most projects related to GNU and Linux, including the Linux kernel. The Free Software Foundation (FSF) distributes GCC under the GNU General Public License (GNU GPL). GCC has played an important role in the growth of free software, as both a tool and an example.
The Four Stages of Compiling a C Program
Compiling a C program is a multi-stage process. At an overview level, the process can be split into four separate stages: Preprocessing, compilation, assembly, and linking.
In this post, I’ll walk through each of the four stages of compiling the following C program:
* "Hello, World!": A classic.
The first stage of compilation is called preprocessing. In this stage, lines starting with a
# character are interpreted by the preprocessor as preprocessor commands. These commands form a simple macro language with its own syntax and semantics. This language is used to reduce repetition in source code by providing functionality to inline files, define macros, and to conditionally omit code.
Before interpreting commands, the preprocessor does some initial processing. This includes joining continued lines (lines ending with a
\) and stripping comments.
To print the result of the preprocessing stage, pass the
-E option to
cc -E hello_world.c
Given the “Hello, World!” example above, the preprocessor will produce the contents of the
stdio.h header file joined with the contents of the
hello_world.c file, stripped free from its leading comment:
[lines omitted for brevity]extern int __vsnprintf_chk (char * restrict, size_t,
int, size_t, const char * restrict, va_list);
# 493 "/usr/include/stdio.h" 2 3 4
# 2 "hello_world.c" 2int
The second stage of compilation is confusingly enough called compilation. In this stage, the preprocessed code is translated to assembly instructions specific to the target processor architecture. These form an intermediate human readable language.
The existence of this step allows for C code to contain inline assembly instructions and for different assemblers to be used.
Some compilers also supports the use of an integrated assembler, in which the compilation stage generates machine code directly, avoiding the overhead of generating the intermediate assembly instructions and invoking the assembler.
To save the result of the compilation stage, pass the
-S option to
cc -S hello_world.c
This will create a file named
hello_world.s, containing the generated assembly instructions. On macOS 10.10.4, where
cc is an alias for
clang, the following output is generated:
.macosx_version_min 10, 10
.align 4, 0x90
_main: ## @main
.cfi_offset %rbp, -16
movq %rsp, %rbp
subq $16, %rsp
leaq L_.str(%rip), %rdi
movl $0, -4(%rbp)
xorl %ecx, %ecx
movl %eax, -8(%rbp) ## 4-byte Spill
movl %ecx, %eax
addq $16, %rsp
.cfi_endproc .section __TEXT,__cstring,cstring_literals
L_.str: ## @.str
.asciz "Hello, World!"
During this stage, an assembler is used to translate the assembly instructions to object code. The output consists of actual instructions to be run by the target processor.
To save the result of the assembly stage, pass the
-c option to
cc -c hello_world.c
Running the above command will create a file named
hello_world.o, containing the object code of the program. The contents of this file is in a binary format and can be inspected using
od by running either one of the following commands:
od -c hello_world.o
The object code generated in the assembly stage is composed of machine instructions that the processor understands but some pieces of the program are out of order or missing. To produce an executable program, the existing pieces have to be rearranged and the missing ones filled in. This process is called linking.
The linker will arrange the pieces of object code so that functions in some pieces can successfully call functions in other ones. It will also add pieces containing the instructions for library functions used by the program. In the case of the “Hello, World!” program, the linker will add the object code for the
The result of this stage is the final executable program. When run without options,
cc will name this file
a.out. To name the file something else, pass the
-o option to
cc -o hello_world hello_world.c
You can see how magnificent to write just one line, a whole operation happens within seconds and you don’t even notice that it happened.
|
OPCFW_CODE
|
A job worker is a service capable of performing a particular task in a process.
Each time such a task needs to be performed, this is represented by a job.
A job has the following properties:
- Type: Describes the work item and is defined in each task in the process. The type is referenced by workers to request the jobs they are able to perform.
- Custom headers: Additional static metadata that is defined in the process. Custom headers are used to configure reusable job workers (e.g. a
notify Slackworker might read out the Slack channel from its header.)
- Key: Unique key to identify a job. The key is used to hand in the results of a job execution, or to report failures during job execution.
- Variables: The contextual/business data of the process instance required by the worker to do its work.
Job workers request jobs of a certain type on a regular interval (i.e. polling). This interval and the number of jobs requested are configurable in the Zeebe client.
If one or more jobs of the requested type are available, Zeebe (the workflow engine inside Camunda Cloud) will stream activated jobs to the worker. Upon receiving jobs, a worker performs them and sends back a
fail command for each job, depending on if the job could be completed successfully.
For example, the following process might generate three different types of jobs:
Three different job workers, one for each job type, could request jobs from Zeebe:
Many workers can request the same job type to scale up processing. In this scenario, Zeebe ensures each job is sent to only one of the workers.
Such a job is considered activated until the job is completed, failed, or the job activation times out.
On requesting jobs, the following properties can be set:
- Worker: The identifier of the worker. Used for auditing purposes.
- Timeout: The time a job is assigned to the worker. If a job is not completed within this time, it can be reassigned by Zeebe to another worker.
- MaxJobsToActivate: The maximum number of jobs which should be activated by this request.
- FetchVariables: A list of required variables names. If the list is empty, all variables of the process instance are requested.
Ordinarily, a request for jobs can be completed immediately when no jobs are available.
To find a job to work on, the worker must poll again for available jobs. This leads to workers repeatedly sending requests until a job is available.
This is expensive in terms of resource usage, because both the worker and the server are performing a lot of unproductive work. Zeebe supports long polling for available jobs to better utilize resources.
With long polling, a request will be kept open while no jobs are available. The request is completed when at least one job becomes available.
Zeebe decouples creation of jobs from performing the work on them. It is always possible to create jobs at the highest possible rate, regardless if there is a job worker available to work on them. This is possible because Zeebe queues jobs until workers request them.
This increases the resilience of the overall system. Camunda Cloud is highly available so job workers don't have to be highly available. Zeebe queues all jobs during any job worker outages, and progress will resume as soon as workers come back online.
This also insulates job workers against sudden bursts in traffic. Because workers request jobs, they have full control over the rate at which they take on new jobs.
Completing or failing jobs
After working on an activated job, a job worker informs Camunda Cloud that the job has either
- When the job worker completes its work, it sends a
complete jobcommand along with any variables, which in turn is merged into the process instance. This is how the job worker exposes the results of its work.
- If the job worker can not successfully complete its work, it sends a
fail jobcommand. Fail job commands include the number of remaining retries, which is set by the job worker.
remaining retriesis greater than zero, the job is retried and reassigned.
remaining retriesis zero or negative, an incident is raised and the job is not retried until the incident is resolved.
If the job is not completed or failed within the configured job activation timeout, Zeebe reassigns the job to another job worker. This does not affect the number of
A timeout may lead to two different workers working on the same job, possibly at the same time. If this occurs, only one worker successfully completes the job. The other
complete job command is rejected with a
NOT FOUND error.
The fact that jobs may be worked on more than once means that Zeebe is an "at least once" system with respect to job delivery and that worker code must be idempotent. In other words, workers must deal with jobs in a way that allows the code to be executed more than once for the same job, all while preserving the expected application state.
|
OPCFW_CODE
|
Our computing needs are ever-evolving, with many tasks requiring regular or timely execution. Is it efficient if your PC could automatically handle these without continuous reminders or manual initiation? Enter the Windows Task Scheduler, an unsung marvel that brings automation to your fingertips. Let’s unravel this powerful tool and discover how to use it to your advantage.
Deciphering the Windows Task Scheduler
Windows Task Scheduler is a component that enables users to automate tasks in a pre-defined manner. Whether it’s launching a specific application at startup, sending emails at particular times, or even running scripts periodically, Task Scheduler can handle it all.
The Necessity of Task Scheduler
- Timely Operations: Automate tasks that need execution at specific times, like backups or updates.
- System Maintenance: Schedule routine maintenance tasks during off-hours to avoid disruptions.
- Resource Intensive Tasks: Plan heavy-duty tasks when you’re not actively using the computer, ensuring smooth performance.
Navigating the Task Scheduler Interface
- Accessing the Tool: Type “Task Scheduler” in the Windows search bar and click on the application that appears.
- Task Scheduler Library: This section showcases all scheduled tasks. You can explore jobs set by Windows, applications, or those you’ve created.
- Action Panel: Here, you can create a new task, import, or manage existing ones.
Creating a New Scheduled Task
- Begin the Process: Click ‘Create Basic Task’ or ‘Create Task’ in the right panel. The former is for simple tasks, while the latter offers more customization.
- Name and Describe: Always provide a meaningful name and description so you remember the task’s purpose.
- Triggers: Define when the task starts. It could be at a specific time, upon login, or even when a particular event occurs.
- Actions: Specify what the task does. Everyday activities include starting a program, sending an email, or displaying a message.
- Conditions: Set conditions like whether the task should run on battery power, when the computer is idle, etc.
- Settings: Adjust settings like task failure behavior, multiple instance scenarios, etc.
The World of Event-Based Triggers
Beyond time-based automation, Task Scheduler excels with event-based triggers:
- Logon Events: Automate tasks every time a user logs on.
- Startup/Shutdown: Trigger actions upon system boot-up or shutdown.
- Specific Events: Start tasks when particular events occur, like a software installation or specific system warnings.
Tips to Master Task Scheduler
- Use Descriptive Names: Always name tasks in a way that instantly reminds you of their functions.
- Test Tasks: After creating a task, run it manually once to ensure it works as expected.
- Monitor Task History: The Task Scheduler maintains a log. Regularly review it to ensure tasks run successfully or to troubleshoot failures.
- Security: Only give tasks the permissions they need. Avoid running errands as administrators unless necessary.
Advanced Capabilities: Task Scheduler’s Extended Power
- Multiple Triggers and Actions: A task can have multiple triggers and actions, allowing complex automation.
- Delaying Tasks: Introduce delays before task execution; this is especially useful if you don’t want it to start immediately after a trigger.
- Repeating Tasks: Set tasks to repeat at intervals, e.g., every 15 minutes after the initial trigger.
- Terminate Conditions: Specify when a running task should terminate, ensuring it doesn’t hang indefinitely.
Tackling Common Issues in Task Scheduler
- Failed Execution: Check the task’s history for error codes. Often, issues like incorrect paths or permissions are the culprits.
- Missed Tasks: If a task doesn’t run at the specified time (maybe the computer was off), you can configure it to execute when the PC is next active.
- Broken Triggers: Occasionally, triggers may not initiate tasks. Ensure you’ve set the trigger conditions correctly.
Safety and Task Scheduler
- Avoid Sensitive Data: When scheduling tasks that require passwords or other sensitive data, ensure it’s stored securely.
- Review Regularly: Periodically check the Task Scheduler library to ensure no malicious or unwanted tasks are running.
- Limit Permissions: Don’t provide tasks with unnecessary system permissions. This minimizes potential damage if something goes awry.
With its robust automation capabilities, the Windows Task Scheduler proves to be an indispensable asset for casual users and IT professionals. By embracing its functionalities, you optimize your PC’s operations and elevate your efficiency and productivity. Dive into Task Scheduler, experiment with its myriad features, and let your computer handle the routine, giving you more time for what truly matters!
|
OPCFW_CODE
|
Object-Oriented Data Programming: C# Meets Cachéby Jesse Liberty
Virtually every meaningful commercial ASP.NET or Windows application stores and retrieves data. Most programmers use a relational database for data storage, and Microsoft has gone to great lengths to create a set of classes that mediate between the object-oriented viewpoint of C# and VB 2005 programming on the one hand, and the tables/rows/columns perspective of relational databases on the other (ADO.NET).
There are, however, other ways to store your objects, one of which is to use an object-oriented database. The idea would be to create objects, and then just store them and retrieve them without thinking at all about tables/rows/columns or even relations!
Now, with that in the back of your mind, consider this problem: many hospitals and other institutions have been using a non-relational database called MUMPS (Massachusetts General Hospital Utility Multi-Purpose System) for the past 40 years (!). They are deeply invested in this technology, and, hey! it works. In 1994 InterSystems bought up various versions of MUMPS and in 1997 they released Caché, using the MUMPS storage engine and language, but with object-oriented services on top. They realized that these institutions do not want to walk away from their data, but they do want to use modern languages, such as Java and C# to create the front end to their new applications.
MUMPS and C#
Flash forward to today. I was asked by the Dana-Farber Cancer Institute (Partners Health Care) in the Boston area to help a group of in-house developers create a new system for their pharmacy that will use MUMPS Globals (the underlying storage) but with a .NET 2005 Windows Forms front end.
Originally I thought, "OK, I'll build the front end, and someone else will worry about this weird non-relational (InterSystems calls it "post-relational") database. But it turns out that you can create objects in Caché (and let it write the globals automagically), and then use their wizard to import Caché objects into (proxy) C# classes.
This was too good not to write about, because it illustrates so many interesting ideas:
- How do you revitalize 40-year-old data technology in the world of objects?
- What is .NET programming with an object-oriented database like?
- How do proxy objects allow C# programmers to thinks they are programming against an object that is really just a set of DB calls?
Actually, there are many other nifty lessons to be learned along the way, but the best way to see them is to create an application. We're going to create a very simple application, and walk through all the steps, which include:
- Design the objects without regard to implementation
- Design the Caché objects that correspond to our ideal objects
- Implement the Caché objects
- Design a C# project that thinks about persistence in terms of objects rather than data sets and tables
- Import the Caché objects into C# using the Caché Wizard
- Interact with the objects in C#. Create new objects, update existing objects, and store objects
Building a Demo Application
To illustrate how this all works we want to build a very simple application that lets you pick from a list of patients and see what medications they take, in what dosage, and by what route (by mouth, IV, etc.).
To implement this, let's think about what objects we'll need, first without regard to implementation details.
Clearly we need a
Patient class, and we can imagine that the
Patient class inherits from
Person providing the name, address, phone numbers, etc. and
Patient providing the patient ID, list of prescriptions, list of doctors, list of disorders, etc.
Let's think about
Prescriptions. To start, there is a
Medication (e.g., Lexapro), a
Dosage (e.g., 20mg), and a
Route (e.g., by mouth). There is also a prescribing doctor, a prescription date, the date the prescription was dispensed, how many were ordered, how many were actually delivered, and so forth.
How do we handle the
Medication? Many medicines come only
in certain dosages, and some are safe only via certain routes (never give yourself
an intravenous (IV) epinephrine injection). Ahhh, complexity. For the purpose
of this demo, we'll simplify, and create the following truncated classes:
Of course, in a real application, there would be many more fields, and some of the simple fields here would be more complex (for example, the patient's doctor would be of type
Physician which in turn would derive from
Notice that the
Route classes contain only a single member. These classes will be used as lookups, so that we can fill drop-down boxes with legitimate values.
To create even this simple application, we'll need the following forms:
- Creating and Deleting Routes
- Creating and Deleting Medication Names
- Creating, Editing, and Deleting Patients (within which will be a list that allows you to add and delete prescriptions)
- Picking a Patient to review or edit
|
OPCFW_CODE
|
This lesson shows you how to add the layout to the list view that will display a paginated list of records.
Adding the list layout
Welcome back. In the last lesson we left you in the browser, so jump back to your editor and find and open the default.php file in the views/messages/tmpl folder. Add the file header in the normal way and then look for the snippet called “Backend list layout”.
Now, there’s not many variables in this one. We just need to type in the option for the component, that’s the name of the component folder “com_hello”. Then we type in the name of the list view, in our case “messages”, and then the view for the single record or item. In our case that will be just “message”. All these variables are in lowercase. When you’ve filled them all out, insert the snippet and let’s walk through the layout.
First of all, we initialise a few shortcuts to variables that are used frequently. We get the user object as we’ve seen before, and also the order column and direction. Remember in the view class, we had a protect property called state that we assigned a value to. Well, this is where and how we start using that information, simply by accessing those properties directly via the $this variable.
Just a quick note - the video is a little out of date because the ordering variable should be treated with the view escape method to protect against malicious injection attacks. The snippet has been updated to reflect this and I’ll explain what escaping does soon.
Now, the rest of the code is more or less self-explanatory because we are using HTML to frame the display of data. But, in between the HTML, we break into and out of the view API and that’s where I really want to focus in this lesson.
Starting the list form
The FORM tag is nothing special, but we have used Joomla’s router to process the form action. We do this by calling a static method of the JRoute class called underscore (_). Joomla’s API tends to use underscore frequently to represent commonly used utility methods. While there is no SEF support for URL’s in the backend, yet, it’s a way to future-proof your code in the event that it one day becomes supported. We’ll dig deeper into the component router when we look at frontend components.
The list table header
The form wraps a TABLE that we’ll be using to display the list of records, and we begin with the THEAD section. The first column is a checkbox that we use to toggle the selection state of the checkboxes assigned to each row of the table. As you can see, this element has an onclick event to make this happen.
Next, is the start of the main column headings and the first heading is for the title of the record. Now, here’s where we introduce some API to help us get clickable headings to change the sorting column of the list. For this, we introduce a new class called JHtml, and this is a helper class that does many things related to HTML output. It too, has an underscore method to access the most commonly used functionality. The JHtml class has a lot of helpers and these are all grouped in separate files. We access individual functions in these helpers by using a dot notation in the first argument. So, you can see we are using a helper method called “grid.sort” and this means there is helper file called grid.php and there is a class in that file that has a method called “sort”. The remainder of the arguments are then passed to that method. Notice also, that we need to echo the result of the helper. This is designed so that you can use these helpers to build a string and then output that string in any manner you desire - that could be to the browser, or it could be to a file.
For “grid.sort”, we pass four arguments: the language key for the heading; the ordering field that this column represents; the active direction of the list ordering; and finally the active column used to order the list.
Moving on, you can see we then duplicate this header cell for all the other columns, adding the published state and category columns. Take note of the language keys used. You can, of course, use component specific keys, but Joomla 1.6 provides many common language elements for you to use.
When we come to the ordering column, you can see we have the sort heading but there is a second element. This relates to the familiar save button that saves custom ordering values in the list.
Moving on, we have headings for the view access level, the name of the record creator, the date the record was created, and the language the record was created for.
Finally, we’ve added a column for the record ID. I like to include that to assist with debugging and support even though some would argue you should hide that sort of information from the user. It’s up to you but I’ve found it very helpful over the years.
The list table footer
It always seems odd to treat the table footer next, but that’s the way HTML prefers so that’s what we do. You’ll recall the view class has a pagination property and the object assigned to this has a very useful method for displaying the pagination links on the page. We just call the getListFooter method, and Joomla’s framework does the rest.
The list table body
Finally we get to the list body where most of the data processing actually occurs.
We break into PHP and start the foreach loop that will display each of our records in a table row, and it’s no surprise it’s using the data from the view property called “items”. This is an array and we put the index of the array item into a variable called $i. We’ll use that a few times.
Next we make a few shortcuts to ordering state and permissions. We do them here mainly so that the HTML code below is a bit cleaner and more easily maintained. Once again, we are using the user object’s authorise method to test permissions on individual records. However, unlike before, you can see the second argument is changing depending on the permission. This argument is what’s called the “asset name” and this relates to an entry in a new Joomla 1.6 database table called “jos_assets”. I’ll explain assets another time, but for now I’ll just say that when we want to create a record, we check against the component category. When we want to edit a record, we check against the record itself or the category that it’s in. When we are looking at whether a person can check in or out, we check that against the “com_checkin” component. Finally, when checking if we can change the published state, we check against the record asset or the category it’s in.
Just a note - while the code shows the record as the asset for the edit and edit state permission checks, I’ve changed the snippet to reflect the more generic case using just the category. We’ll look at assets in more detail another time.
After that we need to assemble the row cells of the table to match our column headings.
The first column is the check box for the row, and again you can see we are using another JHtml grid helper to do this.
The next column does a bit of work displaying the title with an edit link, providing permissions allow for it. If the record is checked out, then the checked-out icon will display but still let you view the record. This subtle change was made to allow online support people to follow users into edit forms so they can diagnose problems more easily. We then display the alias field of the record as well as the note if present. Now, here’s where we see the escape method in the JView class being used again. Basically wherever we have user supplied data that we don’t trust, we should escape the output. The Joomla input filtering is good, but not infallible. The escape method runs the text through either the PHP htmlspecialchars or htmlentities function. Basically we use this wherever we are not expecting to display raw HTML, such as you would for the body text of content.
Then next cell uses a helper to show the published state icon.
The next cell displays the name of the category the record is in. Note again, we escape the output because we do not expect, nor want, HTML in the category title.
The next cell shows the order up and down arrows and the ordering reset box but only if the user has permission to edit the state of records. You can see that the pagination object has some helper methods for displaying the icons. Because we are using categories, we can only order up and down within records in the same category - that’s why we are testing the category id of the current records against the category id of the next record in the list. If the user does not have permission to edit the state of the record, we just echo the value of the ordering field.
The next cell displays the title of the viewing access level, and the cell after that displays the name of the author of the record.
Then next cell is using a JHtml date helper to display the formatted date.
The next cell shows the language the content has been prepared for. If the the value of the language field is a “*” character it applies to all languages, otherwise we display the actual language title.
Finally, the last cell is the record id and because we know that should be an integer, we cast it before echoing to the browser.
Closing the list table form
At the end of the file, we close the HTML form. We need to include an explicit but empty “task” field for the use of the toolbar and clickable icons in the list. The “boxchecked” field is also used by the toolbar to check whether anything in the list has been selected. The list ordering and direction fields are also added to the form. Finally, we use a JHtml form helper to add the security token to the page. This is used to prevent an attack vector called CSRF.
Previewing the final result
Jump back to the browser and refresh the page. Although it doesn’t look like it, we’ve made significant progress. You can see the table header and footer displaying but it’s rather empty because there is no data in the table yet. However, the title is still fairly ugly and this is because we haven’t defined a language key in the language file for the title.
Adding the language string to the language file
Jump back to your editor and into the view.html.php class. Find the line that has the call to JToolbarHelper::title and highlight the language string. Change it into uppercase and copy it. Now find and open the en-GB.com_hello.ini file in the component’s language folder. Paste the language key in and then assign it a suitable value that reflects the name of the page the user will be looking at.
End of part 3
Save the language file and jump back to the browser and refresh one more time. The title of the page now looks great. But, before we dive into actually creating a new records, we’ll spend a few lessons adding more structure to the view. In the next lesson we’ll add some filtering options for search text and several drop-down list. See you back real soon.
|
OPCFW_CODE
|
Scrum of scrums/2014-04-09
- Whoa, a release. Tomorrow, barring disaster. mw.org will have MMV turned on by default, preference exists for opt-out.
- Looking to ops about performance issues still, but nothing terribly blocking for our pilots at least for the next two-ish weeks
- VE: OOJS issue blocking IE<=8 support, plz halp, see #97
- James Forrester has a patch up for review: https://gerrit.wikimedia.org/r/124360
- Fun high-bandwidth conversations in-office about secret breaking changes (and bugs) in OOJS(UI) last week...need a better place for this maybe (announce list?) (maybe I'm just not on it?)
- Wikitech sounds good for this
- Nothing special, really
- #80 - working on search
- Should check with Max about ES
- VE: maintenance for switching edit mode automated test
- Hiring: reviewing candidate tech task
- browsertests: maintenance for small changes to preferences
- turning back on fatals monitor in beta labs post-EQIAD
- lots and lots of MobileFrontend
- Most of Ops team in Athens at Ops Meetup
- In case you haven't heard, there are 3 recent ops hires, say hi!
- Chase (rush)
- Giuseppe (_joe_)
- Filipo (godog)
- Heartbleed vulnerability
- will reset sessions soon, see emails
- stat1 -> stat1003 migration
- most users are migrated over.
- Will turn on base::firewall on stat1003 on Friday.
- still waiting on a few users to respond about bastion access.
- Nik upgraded to 1.1.0 yesterday
- otto to reformat all 16 production elasticsearch nodes soon.
- Largeish deployment tomorrow, VE might break, but then that's not very different from most weeks :)
- Ongoing collaboration with mobile and Parsoid
- need to work with VE on how inspectors work (#100)
- Volley'ing card 82 back to growth :)
- Oops about deploys
- heartbleed session tokens reset
- QUESTION: what are the action items for platfrom re Zero portal???
Partners Engineering / Wikipedia ZeroEdit
- #57 Firefox OS: no progress, but not blocked on RL
- #76 Portal: Yuri's continuing work, will eventually wire up Varnish to use it as the config web-based JSON backend
- #2 ESI headers: blocked until upgrade
- #94 Firefox OS zero integration: please close card
- #95 Analytics: blocked
- #96 More X-Analytics field: stalled, will come after other work
- gerrit / jenkins integration fixes underway
- gerrit fixed after it was taken down by a labs bot
- Stefan has left the team
- Limn change made to make maps easier
- Continued work on GettingStarted, GuidedTour, and mw.cookie (review in progress on this). I also got some reviews on the deletion ID logging patch. I'll follow up on that.
- skinStyles patch: https://wikimedia.mingle.thoughtworks.com/projects/scrum_of_scrums/cards/99 / https://gerrit.wikimedia.org/r/#/c/122838/
- integrating handlebars/lightncandy templating, still talking to Matt Walker/Gabriel Wicke about Knockoff (so close!), not ready for security review
- need Mobile's blob ResourceLoader, Timo -1d the core patch for it https://gerrit.wikimedia.org/r/#/c/111250/
- patches for Parsoid CSS (useful to VE) in gerrit https://gerrit.wikimedia.org/r/#/c/124785/
|
OPCFW_CODE
|
Per Rocky's advice, my business objects have public string properties for EffDate and ObsDate (EffectiveDate and ObsoleteDate) although the private variables are SmartDates.
I'm trying to filter on these values using ObjectListView and having a bit of trouble.
My Filter expression is:
EffDate >= #1/1/2002# AND EffDate <= #12/31/2004#
I get this error message:
Cannot perform '>=' operation on System.String and System.DateTime.
How are other folks handling this? I'm looking for best practice here. :)
Now I'm having a problem when the date is null (which means the string representation = String.Empty).
As soon as the filter runs into an empty date it barfs with this message:
An exception of type 'System.FormatException' occurred in mscorlib.dll but was not handled in user code
Additional information: String was not recognized as a valid DateTime.
I'll just mention that the programmer responsible for that message in DataView should have supplied the offending value as part of the error message... :(
I've tried testing for null, but it doesn't seem to short-circuit the exception processing to do so. :(
Here's my expression:
IIF(ISNULL(OBSDATE,'Y') = 'Y',#01/01/1901#,CONVERT(ObsDate,System.DateTime)) >= #"+ filterObsDtSt + "#"
filterObsDtSt has a value of "1/1/1901"
ObsDate has a value of "".
I don't want to be the guy to go against Rocky's advice - especially with his own framework! - but why are you working with your dates as strings? Why not just use the DateTime value? Then your empty date will either be DateTime.MinValue or DateTime.MaxValue, which you should be able to work with without all the string futzing.
The old IIF function evaluates both parts! It will always crash. It does not do what you think it does.
I believe a new IIF function was added in VB9 that should do what you want. You are better off writing the 3 lines of code to do it the right way anyway.
You can expose your properties as Dates too. Sometimes you want to display the string for binding but sort on the real underlying date.
There are several reasons I'm exposing my date properties as strings instead of SmartDates or DateTimes.
Reasons against using DateTime:
Reasons against SmartDate:
Reasons against String:
So far, to me, it's seemed like a Monty Hall gameshow where all three doors had glasses of hemlock as the prize. :(
I finally got it to work. Had to turn the IIF and CONVERT functions inside out. Here's the first part of the test:
CONVERT(IIF(LEN(ObsDate) = 0,'01/01/1901',ObsDate),'System.DateTime') >= #"+ filterObsDtSt + "#"
I solved the sorting problem some time ago, thanks to an earlier post of yours. :)
Copyright (c) Marimer LLC
|
OPCFW_CODE
|
why is the message showed "To view the content on this page,Please click here to install JAVA"?
we are developing an JavaFX application.
i'd like to start this application by web start throu WebSite, but the message("To view the content on this page,please click here to install Java") is showed.
So, i cannot start this application. what should i do??
could you give us any hints!!
FYI,we can start the application using NetBeans(debugging mode & normal mode).
of course, we checked that Java was installed.
(Application Development Environment)
OS:Windows7 64bit ver6.1
Browser:IE11
JDK:JDK1.7.0_40
Language:JavaFX2.2.40
Tool:NetBeans IDE7.3.1
Follow the instructions and install JRE. It looks like JRE is missing on the machine.
Internet Explorer 11 is not a supported browser for JavaFX 2, so Oracle haven't validated that JavaFX will work with it.
Some browsers run in 32 bit mode, so they don't work with 64 bit Java, perhaps this is the issue here, or perhaps Oracle don't have a compatible 64 bit plugin for Internet Explorer 11.
Additional issues are the JavaFX deployment toolkit might not understand the Internet Explorer 11 user agent string (as detailed in JavaFx web not working with Internet Explorer 11 with JRE7).
There are further issues detailed and potential fixes for them detailed in the answers and comments on Can not run Java Applets in Internet Explorer 11 using JRE 7u51.
JDK1.7.0_40 is not the most recent version of Java. If you must run Java in a browser, always require the most recent version, otherwise you open your clients up to potential security compromises. Additionally, earlier Java versions are probably less likely to be compatible with later browser versions.
Hi,jewelsea. Thank you for your corporation.
We need to learn more about JavaFX.
After check your advice, we conciderd that this app worked on Chrome.
So we try to use otehr browser. thanks.
This messgae is shown when JRE is not installed or multiple versions of JRE is installed in the same machine. Please check your java version in the console. Try to uninstall the JDK(s) installed in your machine and reinstall the latest !
@hiro hope it helps !
hi,ItachiUchiha. thank you for your answer.
however,unfortunatelly,we already have checked that JRE is installed and not multiple vrsion of JRE.
and JRE Version of WebServer and client are same versions...
|
STACK_EXCHANGE
|
Load Additive and Proximity Checkers Not Working Together.
Unity 2018.3.8 / Mirror 2184
Repro Project Attached
Add both scenes from _Scenes folder to build settings.
Build & Run as LAN Server
Open MainScene & click Play as LAN client
WASDQE to move and turn
There's a trigger zone in the middle of the Main Scene that's Server Only with a Zone Handler script that fires a TargetRpc to player that enters the zone to load the sub scene. There's a ZoneVisualizer with a semi-transparent material so you can see where it should trigger.
There are 4 networked objects with proximity checkers in the corners of the Main scene...they have semi-transparent spheres showing their ranges. These work correctly if you move the player toward them.
1st Bug There are the same 4 networked objects with proximity checkers in the middle of the SubScene. These should be hidden when the SubScene loads until player gets close to them...instead they all are shown immediately when the SubScene is loaded.
2nd Bug If you run LAN server in editor and LAN client as built, the SubScene content never appears in the client, despite log messages that say it's loaded, even when player gets to the scene center where it'd be right among the cluster of prefabs there.
MirrorAdditiveTest.zip
@MrGadget1024 can you try with latest master again? fixed another sceneid issue today.
No change in behavior with latest master this morning
A while back when Paul fixed Additive the last time, I had to call NetworkServer.SpawnObjects() after loading additive scenes on the server, and I had to call ClientScene.PrepareToSpawnSceneObjects() on the client after it loaded an additive scene.
NetworkServer.SpawnObjects() throws a flood of errors so I commented that out in the server. The additive scene on the server appears to load up fine.
ClientScene.PrepareToSpawnSceneObjects() doesn't seem to make any difference on the client. The scene is loaded additive when it should, but everything is immediately visible even though I'm out of range of the proximity checkers of the sub-scene objects.
Updated repro to latest Mirror
taking a look at this now. last open bug, here we go..
going to share my findings here along the way.
you can call LoadSceneAsync in OnStartServer directly, no couroutine / yield magic needed
same for UnloadScenes
OfflineScene & OfflineGUI seems to be unnecessary. please avoid that in next bug report :)
SceneLoader.cs simplified:
please try to keep it as simple as possible next time. this takes a lot of time to dig through and understand
looks like LoadSceneAsync doesn't trigger NetworkScenePostProcess, which is why the additively loaded scene objects don't contains the scene hash. that's a problem.
that is also the reason why they aren't being disabled on load
onpostprocesscene is actually called. but SceneManager.GetActiveScene().name is MainScene each time because the additively loaded scene isn't the main scene. hmm
host mode of the example seems to load subscene twice when walking into it, because server already loaded it in onstartserver once. will avoid host mode
got it working in my tests @MrGadget1024 .
we will need two fixes:
OnPostProcessScene needs to include all scenes, not just getactivescene (but still exclude dontdestroyonload)
NetworkManager needs SceneManager.sceneLoaded += OnSceneLoa
ded:
this definitely works. I will add this to master after some more tests. afterwards we'll have to figure out a more consistent way for PrepareToSpawnSceneObjects and SpawnObjects. ideally one way that works in all cases.
fixed in master. thanks for repro project mrgadget.
:tada: This issue has been resolved in version 1.4.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
Please retest with repro attached to this comment
Build and run as LAN Server
run in editor as LAN Client
move toward the tall spire until the subscene loads and then all subscene small objects are visible
move out of the subscene area in the middle so that it unloads
move back into the subscene area so that it loads again
observe errors in console
MirrorLoadAdditive.zip
:tada: This issue has been resolved in version 1.4.2 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
GITHUB_ARCHIVE
|
[W.I.P] VR tutorial cleanup and code refactor
This PR contains a number of changes I made to the project after getting it working with Godot 3.1.
Most of the changes are just code refactors that make the code a little cleaner and easier to understand. I also fixed a few bugs in the project and added some (minor) additional features.
Changes:
[x] Added rumble when objects are picked up, the guns are fired, and when the sword collides with something.
[x] Added a raycast "laser sight" to the Shotgun for easier aiming
[x] Added a new knockback/collision-force system to the pistol, shotgun, and bomb objects. Now the closer the object is to the collision, the farther it goes.
This change also makes the bombs a little more consistent and natural looking.
[x] Added some code to silence the GDScript code warnings. Nothing major was silenced, with most of the warnings being due to not using the return value from the connect function.
[x] Created a base class, VR_Interactable_Rigidbody, to use for all interactable Rigidbodies in the game.
This makes the code a little easier and should allow for defining a consistent interface for intractable VR objects. Right now the class is rather bare, but in a future PR I might expand upon it, moving some of the object related code out of the VR controller script and into this class.
[x] Changed the code within the VR controller to use VR_Interactable_Rigidbody instead of checking for methods.
[x] Changed the name of the project to "Godot OpenVR FPS"
[x] Changed the Vignette shader strength to a uniform so the strength can be changed without having to edit the shader.
[x] Refactored the VR controller class.
The code has been broken down into smaller functions. The code itself didn't really change any, but by breaking it down into smaller, more compact functions, it should be easier to made adjustments in the future. Breaking everything down into smaller functions should also make writing/updating the tutorial a lot easier too.
[x] Refactored the sword code and scene
Now it uses a KinematicBody for collision detection. While this may impact performance slightly, as the collision detection is done every frame, the entire thing feels WAY better and the code is simpler.
[x] Updated all of the comments within the code. Now they are accurate and reflect the changes within this PR.
The comments also have better explanations and I fixed some code style inconsistencies.
[x] Removed the glow effect from the WorldEnvironment. While it made a minor difference to the visuals, removing it should make performance a little better while also making the project more compatible with GLES2.
[x] Fixed issue where the shotgun raycasts would send targets towards the player. Now when the shotgun fires, objects hit fly away from the end of the shotgun.
[x] Fixed issue where the meshes used for teleportation and raycast-grabbing in the VR controller were initially visible. Now the meshes are only visible when the player starts teleporting and/or changes grab modes.
I think the list above contains most, if not all, of the changes. I made most of the initial changes late at night, so I may have missed a few. Also, some of the changes are made by Godot when opening the TSCN files.
If there are other small additional things anyone can think of, please let me know either by commenting in this PR and/or by making a GitHub issue. I'll update this post as additional changes are made
Okay! Now all that is left is updating the tutorial on the Godot documentation repository. Once that is done and this PR is merged, the VR starter tutorial is fully ready and updated for Godot 3.1.
Right now I'm leaving the PR in case I bugs are found and/or I need to make minor changes as I update the tutorial.
Sorry for the late reply, finally got some time to play around with this. It all works on my index for the most part :)
Understandably with the Vive you have a grab action and drop action, but with both the index knuckles and oculus touch controllers I would add an option so you need to keep the button pressed to hold an object.
I also found that I had to have my hands on an object to grab it instead of being in range. A more physical interaction but not always handy if you lack full roomscale tracking.
But looks like all your changes work fine. I need to play more and start seeing what I can tweak/improve :)
I'm going to see if I can make this work on the quest this week :)
ps, I'm happy to merge this and will do separate PRs as we improve things, that way we can match the PRs with any changes in documentation.
No problem, thanks for looking through it!
As for the changes, I agree that keeping the button held for grabbing would be a good idea for controllers that have more natural grabbing buttons.
For the Windows Mixed Reality controllers, which is what I use, in my opinion having grabbing where you do not need to hold the button makes it easier. I don't know for the other VR controllers out there, but I think it would probably be best to make the default grab behavior more like a toggle, where you can change it (somehow) to be where objects are only held when the grab button is held.
As far as having to have your hands on an object to grab it, that is the default grab mode. You can change the grab mode to use a Raycast by pressing the menu button on the VR controller, which makes it easier to grab objects when you lack full roomscale tracking. That said, the Raycast is rather short, so that still might be an issue.
Thanks again for looking through it! I'll merge this PR in and when the project is good to go, I'll update the PR for the Godot documentation to reflect any changes made to the VR project.
|
GITHUB_ARCHIVE
|
At my current place, we’ve set up TeamCity as our build server. Previously I’ve used CruiseControl.net and never really questioned it – it works, it does the job, seemed quite nice if a little fiddly to setup.
(If you’re thinking “Hang on, Team City isn't free is it?” - well there is a free version that is limited to a limited number of projects, configurations and agents – check the site for more details)
I was a bit sceptical about moving to TeamCity because I didn’t want to feel the pain of setting up yet another build/ci server and spending ages trying to get it working with our nant scripts – turns out I was so so wrong to worry. Team city is awesome! Let me talk you through some of the awesomeness…
Setting it up is a breeze
Its seriously easy to set a build configuration using TeamCity. Its all driven through the nice friendly web interface, so a real breeze to setup. Assuming you have an automated build script of some sort set up for your solution (like nant, nant, maven2, msbuild etc) then its really easy to plumb this into Team City. You can set up multiple build agents for your build really easily too.
So once you’ve got your build set up, you get all the usual stuff like a build history including things like which check-ins caused the build etc. You can drill into previous builds and see what happened – get the full build log, look at the tests, download the artefacts you defined for the build, all good stuff. You can also manually force a build. If a build fails you can set responsibility for fixing too. All of this is present in most (if not all) CI servers, but its just a lot more polished and feature-rich in Team City.
Really Awesome Test and Cover Integration
This is what got me really excited (maybe I should be worried about that). You get all sorts of great reports/graphs from Team City that are really useful – like this one:
This shows a history of a swear word test we have, and how long its been taking to run and on what build agent.
If you have test coverage set up (like PartCover) then you can also integrate that and you get some really nice reports showing your coverage by class, methods and LOC – you even get an indicator of what the check-in did to improve (or reduce) that coverage – which is an awesome motivator for adding more tests!
So, all in all, I really like TeamCity, and I’m glad to be using it. If you’re considering getting a CI server set up for your project, or you want to try another flavour of CI server, check it out!
|
OPCFW_CODE
|
Vapor-x R9 290X 8GB / i7 4790k @ 4.6ghz 24 GB ram I cannot figure out what is going on. This is an issue I've been dealing with for about 2 months.. I thought it was related to drivers, so I reverted back to my last good set, not the issue. I thought it was my PSU. replaced it with a new one.. not the issue I keep coming back to thinking this is a driver conflict of some sort.. But I can't figure out what is causing it. Basically what happens is, my PC will do a hard crash when gaming, sometimes watching youtube or using chrome.. I always know when it will happen, because parts of my screen will black out in a flicker, and then it usually is followed up by a crash if I could something or keep doing what I'm doing.. I can play a round of PUBG, no problem, sit in the menu, and when I start my next game, a lot of the time, I'll get a hard freeze when it switches loading screens... The PC will freeze, the sound will buzz and I'm forced to reboot. I've tried dropping my CPU over clock, tested my memory, tried no OC on the gpu, tried not using afterburner. it doesn't seem to matter.. Sometimes its totally usable for a week, and i'll sit down and use it and it freezes when I use it. (again, during full screen youtube sometimes, but more likely during a game, but never at full load, usually when something changes IE : main menu to loading screen or back to game). I did have a problem for a day where it would do the same things just after logging in. after a few DDU's and reinstalls I haven't seen that.. I also started to wonder if my span wall paper was the issue, since one screen is off the intel graphics, and the other is off the GPU, and the GPU is at 75 hz for display... that was a change I made when all this started, so I changed it back to the way it was.. I was able to play a couple rounds of PUBG.. Still had a hard freeze though, but it happened when I hit tab to bring up my inventory.. same type of freeze, everything locks up, buzzing sound. I'm running out of ideas.. I don't want to throw money at this. the card isn't overheating I changed thermal paste in july and it runs no higher than 73 degrees. The fact that I can game with it and the freezes are random, or from silly things like bringing up inventory has me believing something else is going on....... I also have wattman disabled... to add.. I also read Icue had caused problems like this (corsair keyboard/mouse software). i tried removing that and the issue persisted..
|
OPCFW_CODE
|
// WaveletTree implementation.
// Created by dsisejkovic on 10.01.16..
//
#include "WaveletTree.h"
void checkIfLetterAllowed(char letter, char *alphabet, int alphabet_len) {
bool allowed = FALSE;
for(int i=0;i< alphabet_len;++i) {
if (alphabet[i] == letter) {
allowed = TRUE;
}
}
if (allowed == FALSE) {
error("Letter not in alphabet!");
}
}
bool isLeafNode(struct WaveletNode *node) {
if (node->letter != '\0') {
return TRUE;
} else {
return FALSE;
}
}
void deleteNode(struct WaveletNode *node) {
if (isLeafNode(node)) {
free((void *) node);
return;
}
deleteNode(node->left_child);
deleteNode(node->right_child);
freeBitVector(node->bit_vector);
free((void *) node);
}
void deleteTree(struct WaveletTree *tree) {
deleteNode(tree->root);
free((void *) tree);
}
bool getEncodingType(char *complete_alphabet, char letter,
int left, int right) {
int half = (right - left) / 2 + left;
return letter <= complete_alphabet[half] ? FALSE : TRUE;
}
// expects bit_vector to be defined
void encodeToBitVector(struct BitVector *bit_vector, char *data_str_part,
char *complete_alphabet, int data_str_len, int left, int right) {
if (bit_vector == NULL) {
error("BitVector must be defined before encoding.");
}
int half = (right - left) / 2 + left;
for (int i = 0; i < data_str_len; ++i) {
char letter = data_str_part[i];
// find if letter on [i] should be 0 or 1 encoded
bool encoding_value = letter <= complete_alphabet[half] ? FALSE : TRUE;
bitVecSetOnPosition(bit_vector, i, encoding_value);
}
}
struct WaveletNode *allocateWaveletNode() {
struct WaveletNode *node = (struct WaveletNode *) malloc(sizeof(struct WaveletNode));
node->letter = '\0';
return node;
}
struct WaveletNode *addNode(char *complete_alphabet, char *node_chars, int input_len, int left, int right) {
struct WaveletNode *node = allocateWaveletNode();
node->bit_vector = allocateBitVector(input_len);
encodeToBitVector(node->bit_vector, node_chars, complete_alphabet, input_len, left, right);
node->alphabet_start = left;
node->alphabet_end = right;
if (right - left == 0) {
// node contains only one type of character
// it must be a leaf node
node->letter = complete_alphabet[left];
}
else if (right - left == 1) {
// node with two characters in alphabet
// has two leafs
node->left_child = allocateWaveletNode();
node->left_child->letter = complete_alphabet[left];
node->left_child->parent = node;
node->right_child = allocateWaveletNode();
node->right_child->letter = complete_alphabet[right];
node->right_child->parent = node;
}
else {
int half = (right - left) / 2;
int middle = left + half;
int length = 0;
char *extracted_left = extractLettersByEncoding(node->bit_vector, node_chars, FALSE, &length);
node->left_child = addNode(complete_alphabet, extracted_left,length, left, middle);
node->left_child->parent = node;
char *extracted_right = extractLettersByEncoding(node->bit_vector, node_chars, TRUE, &length);
node->right_child = addNode(complete_alphabet, extracted_right, length, middle + 1, right);
node->right_child->parent = node;
free((void *) extracted_left);
free((void *) extracted_right);
}
return node;
}
struct WaveletTree *buildTree(char *input_str, int input_len, char *complete_alphabet, int alphabet_len) {
struct WaveletTree *tree = (struct WaveletTree *) malloc(sizeof(struct WaveletTree));
tree->root = addNode(complete_alphabet, input_str, input_len, 0, alphabet_len - 1);
tree->root->parent = NULL;
return tree;
}
int rankRec(struct WaveletNode *node, char *complete_alphabet,
char letter, int position) {
// find encoding of this letter on this level
bool encoding = getEncodingType(complete_alphabet, letter, node->alphabet_start, node->alphabet_end);
// calc num of times the encoding appears up to 'position'
int freq = getOccurrenceCount(node->bit_vector, position, encoding);
if ((encoding == FALSE && isLeafNode(node->left_child)) ||
(encoding == TRUE && isLeafNode(node->right_child))) {
return freq;
}
if (encoding == FALSE) {
return rankRec(node->left_child, complete_alphabet, letter, freq);
} else {
return rankRec(node->right_child, complete_alphabet, letter, freq);
}
}
int rankOp(struct WaveletTree *tree, char *complete_alphabet,
int position, char letter) {
checkIfLetterAllowed(letter, complete_alphabet, (int) strlen(complete_alphabet));
struct WaveletNode *root = tree->root;
// for root node we take position + 1
bool encoding = getEncodingType(complete_alphabet, letter, root->alphabet_start, root->alphabet_end);
// calc num of times the encoding appears up to 'position'
int freq = getOccurrenceCount(root->bit_vector, position + 1, encoding);
if (encoding == FALSE) {
return rankRec(root->left_child, complete_alphabet, letter, freq);
} else {
return rankRec(root->right_child, complete_alphabet, letter, freq);
}
}
char accessRec(struct WaveletNode *node, char *complete_alphabet, int position) {
bool encoding = bitVecGetOnPosition(node->bit_vector, position);
if (encoding == FALSE) {
if (isLeafNode(node->left_child)) {
return node->left_child->letter;
}
int freq = getOccurrenceCount(node->bit_vector, position, encoding);
return accessRec(node->left_child, complete_alphabet, freq);
} else {
if (isLeafNode(node->right_child)) {
return node->right_child->letter;
}
int freq = getOccurrenceCount(node->bit_vector, position, encoding);
return accessRec(node->right_child, complete_alphabet, freq);
}
}
char accessOp(struct WaveletTree *tree, char *complete_alphabet, int position) {
struct WaveletNode *root = tree->root;
return accessRec(root, complete_alphabet, position);
}
int selectRec(struct WaveletNode *node, char *complete_alphabet,
char letter, int nth_occurrence) {
checkIfLetterAllowed(letter, complete_alphabet, (int) strlen(complete_alphabet));
bool encoding = getEncodingType(complete_alphabet, letter, node->alphabet_start, node->alphabet_end);
int position = calcNthOccurrence(node->bit_vector, nth_occurrence, encoding);
if (node->parent == NULL) {
return position;
} else {
return selectRec(node->parent, complete_alphabet, letter, position + 1);
}
}
int selectOp(struct WaveletTree *tree, char *complete_alphabet, char letter, int nth_occurrence) {
struct WaveletNode *node = tree->root;
// find leaf node containing 'letter'
// bottom-up procedure: find starting node
while (!isLeafNode(node)) {
bool encoding = getEncodingType(complete_alphabet, letter, node->alphabet_start, node->alphabet_end);
if (encoding == FALSE) {
node = node->left_child;
} else {
node = node->right_child;
}
}
return selectRec(node->parent, complete_alphabet, letter, nth_occurrence);
}
|
STACK_EDU
|
copy directory and its contents without using xcopy or robocopy
xcopy and robocopy are not working when I try to transfer files to a remote drive, so i'm restricted to using del and copy until I can figure that out.
Here is what I have so far:
del /q y:\OPENcontrol\targetDir
for /d %%x in (y:\OPENcontrol\targetDir\*) do @rd /s /q "%%x"
copy c:\Users\CNC\share y:\OPENcontrol\targetDir
How can I copy folders and their contents without using xcopy or robocopy?
Edit: This is on a CNC machine that is transferring files to its OPENcontrol module. The code needs to work within the limitations of the OSAI controller. A good example is that do (mkdir "destination\%%i" copy "%%i\*" "destination\%%i") had to execute in two separate loop commands, one for mkdir and one for copy
you would need to use a for loop and recursively search each directory in source, create it in destination and copy it's content... something similar to for /d /r "sourcedir" %%i in (*) do (mkdir "destination\%%i" copy "%%i\*" "destination\%%i")
@double-beep I did not, I gave example batch code, there should be newline between mkdir and copy commands.
@double-beep yep. I cannot post an answer now as I am on my phone and typically do not like posting answers if I cannot test before I post.
@GerhardBarnard well, I don't think it will work. md command will make a folder with destination\full\path\to\that\\folder and even %%~ni won't work as because it might be in a subdirectory. Thinking of another solution!
@Gerhard Brnard I made some small changes and it worked: for /d /r "c:\Users\Nil\share" %%i in (*) do (mkdir "c:\Users\Nil\targetDir\%%~nxi" copy "%%i\*" "c:\Users\Nil\targetDir\%%~nxi") post your answer with the above changes and i will accept
Somehow this reminds me of an XY Problem... why should robocopy and xcopy not work? what exactly did you try using these two commands?
@Carpk solution won't work if a folder created exists in a subfolder.
certainly robocopy can copy files to/from the network. It even has the option /Z option for those files. xcopy also has /Z Copies networked files in restartable mode. This is definitely an XY problem
@phuclv @aschipfl This is on a CNC machine, there are limitations to the OPENcontrol module where I am attempting to copy these files. I even had to break the do block into two separate for loop commands.
I don't know what OPENcontrol is but the protocol is the same in robocopy and xcopy so that shouldn't be a problem. The target don't even know how you copy the files
@double-beep There is a reason I do not post answers when I am unable to test, hence the comment stating "something similar to" ;)
|
STACK_EXCHANGE
|
Steps to Defect-Free
Nine Steps to Delivering Defect-Free Software
Copyright © 1997, 1998 Terence M. Colligan
Hello. I am Terry Colligan, president of Tenberry Software, Inc. I have been a software developer for over 30 years, and have been managing software development for over 20 years. Tenberry Software (formerly Rational Systems) has a reputation for producing high-quality software and for having extremely good engineers.
Although I thought I understood the importance of quality, and took pride in the quality of the software we produced, I never believed that delivering defect-free software was possible. After all, everyone knows that all software has lots of bugs, right?
Well, no, not necessarily! Certainly, most experiences with today's software quality are not encouraging. Although few people can name even one piece of software which they use that has no bugs, defect-free software is possible to create. We know it is possible, because we're doing it.
It started with a single engineer. This engineer was consistently producing work with a defect rate more than one hundred times smaller than our other engineers. She has done so for us for over three years now. During the same time, she has produced three to five times as much code as any other engineer.
I found this so exciting that I determined to find out how she did it, and to see if we could teach our other engineers to achieve the same quality results.
I later discovered that one of my best friends, an independent consultant in the mainframe/Cobol world, has been similarly producing defect-free results on his projects.
We have developed a process to produce guaranteed defect-free software. (We are continuing to refine our process, but it works now.) To help improve general software quality, we are sharing the nine steps of our process:
1. Believe Defect-Free Software is Possible
Surprisingly, the first reaction that I get when I describe Defect-Free Software is to be told that it's just not possible. Defect-Free Software seems to be self-contradictory. Some folks even act as if "Defect-Free Software" is an attempt at computer humor.
In fact, this attitude is the biggest obstacle preventing the delivery of defect-free software! The most striking difference between the two defect-free engineers and our other engineers (including me!) is their attitude towards software defects.
The average engineer acts as though defects are inevitable. Sure, they try to write good code, but when a defect is found, it's not a surprise. No big deal, just add it to the list of bugs to fix. Bugs in other people's code are no surprise either. Because typical engineers view bugs as normal, they aren't focused on preventing them.
The defect-free engineers, on the other hand, expect their code to have no defects. When a (rare) bug is found, they are very embarrassed and horrified. When they encounter bugs in other people's code, they are disgusted. Because the defect-free engineers view a bug as a public disgrace, they are very motived to do whatever it takes to prevent all bugs.
In short, the defect-free engineers, who believe defect-free software is possible, have vastly lower defect rates than the typical engineer, who believes bugs are a natural part of programming. The defect-free engineers have a markedly higher productivity.
In software quality, you get what you believe in!
2. Think Defect-Free Software is Important
Why is defect-free software important?
If you (or your manager) don't think delivering defect-free software is important, you won't spend the effort necessary to deliver it.
3. Commit to Delivering Defect-Free Software
In the past, I was the single biggest obstacle to producing defect-free code at Tenberry. Because I didn't really believe that defect-free code was possible, I made decisions primarily focused on short schedule times.
In retrospect, virtually every decision against trying for defect-free and in favor of short schedule time was wrong and resulted in longer schedules, more bugs, more support, higher costs and smaller profits!
Making a firm commitment to defect-free code and holding to that commitment, in spite of schedule and other pressures, is absolutely necessary to producing defect-free code.
As a nice side benefit, you will see improved schedules and reduced costs!
4. Design Your Code for Simplicity and Reliability
After attitude and commitment, program design and structure have the biggest impact on defect-free code. A clean, well structured design simplifies producing reliable code. A poor design cripples the engineer, and will make it impossible to achieve defect-free code.
Each function should be precise -- it should have only one purpose. Each action or activity should be implemented in exactly one place. When programs are structured this way, the engineer can easily find the right place to make a change. In the unlikely event that a bug is discovered in testing, the engineer can go directly to the code with the defect and promptly correct it. This saves time and is the major cause of the faster schedules experienced with Defect-Free Software.
In addition to designing for clarity, it's important to keep the defect-free goal in mind. You want to choose designs that will be least likely to have bugs. In other words, avoid tricky code. Don't start to optimize code unless you are sure there is a performance problem.
5. Trace Every Line of Code When Written
To me, one of the most surprising techniques used by our defect-free engineer was the deliberate tracing in a debugger of each line of new code when it is written.
As each line of code is about to be executed, you should try to predict what the effect will be -- what data will be changed, which path a conditional will follow, etc. If you can't predict what the effect will be, then you don't understand the program you are working on -- a very dangerous situation. If you don't predict correctly, you have probably discovered a problem that should be addressed.
Tracing all new code shows:
Tracing all new code will ensure that your code will be tested and is functioning as designed -- both important characteristics of defect-free code.
6. Review Code by Programmer Peers
Peer code reviews have consistently been shown to be the single most cost-effective way of removing bugs from code. The process of explaining a new section of code to another engineer and persuading that second engineer the code is defect-free has several positive impacts:
Peer code reviews seem to work best. Code reviews done by managers or senior technical staff can have some of the same benefits, but sometimes are less effective due to the interpersonal dynamics.
7. Build Automated QA into Your Code
Obviously, to build defect-free code, you have to be able to test your code. In addition to including a testing plan/strategy into the implementation, you should design specific code to provide for full, automated testability.
The most effective testing we use is fully automated or regression testing. This is a series of fully automated tests that are run after each build of a program. The tests are designed to exercise every part of the program, and produce a success/failure report. The idea is to use the power of the computer to make sure that the program hasn't been adversely affected by a change.
If the design is well structured, most changes should not have side effects. The purpose of these automated tests is to provide insurance that the coding assumptions are valid, and that everything else still works. By making the tests completely automated, they can be run frequently and provide prompt feedback to the engineer.
If tests are run by manually testing the program, we have the chance of human error missing a problem. Manual testing is also very expensive, usually too expensive to run after every change to a program.
There are a number of commercial testing tools available which are designed to help you automate your testing, particularly in GUI environments such as Windows. Although they are no doubt better than manual testing, we have not found them to be effective, for a number of reasons. (For more details, check out our automated testing strategy.)
By building support for automated testing into your program, you can approach 100% automated testing. Without this customized, built-in testability, you will be lucky to achieve 35% automated testing, even with the best commercial QA testing tool. We recommend that you budget five percent of total engineering time to creating support for automated QA testing.
Of course, each new piece of code should have a corresponding set of tests, added at the same time as the code is added, for the automated QA suite.
In order for fully automated execution of testing to be of value, the tests that are automatically executed and checked must cover the software fully. To the extent that they don't, running the tests doesn't tell you anything about the part of your software that wasn't exercised by the testing. (This is true for all testing, whether automated or manual.)
8. Build and Test Daily
Once you have a fully automated test suite, you should run it after every build. This gives developers feedback about the changes they are making, and it gives management clear, objective feedback about the project status.
Clear, objective feedback about project status help managers make better estimates and plans. This feedback can help you identify and address problems while you still have time to do something about them. In addition, this clear, objective feedback puts managers in a better position to provide correct feedback to their managers (or shareholders). Finally, this objective feedback helps managers decide when a project can be shipped or deployed.
The more prompt the feedback to the programmers, the more useful it is. The shorter the time between the creation of a defect and its discovery, the easier it is for the programmer to understand just what they have done wrong. Prompt feedback of failing tests can work as a kind of positive reinforcement for development techniques that work and negative reinforcement for techniques that don't.
By automating the build process as well, you can schedule builds of your system daily. By building daily, you will maximize the feedback to both your programmers and your management.
9. Use Automated Checking Wherever Possible
There are a lot of existing tools that can be used to find errors in your code in an automatic or semiautomatic manner. Your programmers should be using these tools wherever possible.
These tools should be used in addition to the clean design, rather than instead of. No matter how much you use automated checking tools, using these tools alone will never turn poorly designed, buggy code into defect-free code. You can however, find a lot of bugs that would otherwise take much more time and effort to find and fix.
Useful automated checking tools include:
These kinds of tools are important, particularly for catching the kinds of errors that don't have obvious symptoms, such as memory leaks.
That's the overview of how we create defect-free software. Obviously, there is a lot of work involved. There are also lots of details that will need to be adapted to your specific situation.
Applying these defect-free methods to an existing program will be worthwhile as well. Although it's harder to achieve a totally defect-free result with existing code (usually due to the design), applying these steps will result in a significant reduction in an existing program's defect rates.
You can deliver defect-free software -- all you have to do is demand it. By following these steps and working constantly towards the defect-free goal, you will see more and more of your software become defect-free.
Tenberry helps companies deliver defect-free software in two ways: We produce defect-free software under contract, and we consult with companies to help them produce their own defect-free software processes.
To discuss Tenberry's Defect-Free implementation of your software project or to start your own process towards Delivering Defect-Free Software, contact our sales department at the address below. You'll be glad you did!
Tenberry Software, Inc.
P.O.Box 20050, Fountain Hills, AZ 85269, USA
|
OPCFW_CODE
|
[Feature] Support DDOD: Disentangle Your Dense Object Detector(ACM MM2021 oral)
Disentangle Your Dense Object Detector
https://arxiv.org/pdf/2107.02963.pdf
Introduction
Deep learning-based dense object detectors have achieved great success in the past few years and have been applied to numerous multimedia applications such as video understanding. However, the current training pipeline for dense detectors is compromised to lots of conjunctions that may not hold. In this paper, we investigate three such important conjunctions: 1) only samples assigned as positive in classification head are used to train the regression head; 2) classification and regression share the same input feature and computational fields defined by the parallel head architecture; and 3) samples distributed in different feature pyramid layers are treated equally when computing the loss. We first carry out a series of pilot experiments to show disentangling such conjunctions can lead to persistent performance improvement. Then, based on these findings, we propose Disentangled Dense Object Detector(DDOD), in which simple and effective disentanglement mechanisms are designed and integrated into the current state-of-the-art dense object detectors. Extensive experiments on MS COCO benchmark show that our approach can lead to 2.0 mAP, 2.4 mAP and 2.2 mAP absolute improvements on RetinaNet, FCOS, and ATSS baselines with negligible extra overhead. Notably, our best model reaches 55.0 mAP on the COCO test-dev set and 93.5 AP on the hard subset of WIDER FACE, achieving new state-of-the-art performance on these two competitive benchmarks.
Results and Models
Model
Backbone
Lr Schd
box mAP
AP50
AP75
APs
APm
APl
ATSS(IoU)
ResNet50
1x
39.4
56.6
42.6
23.9
42.5
49.6
DDOD
ResNet50
1x
41.6
59.9
45.2
23.9
44.9
54.4
DDOD-FCOS
ResNet50
1x
41.6
59.9
45.3
24.0
44.6
54.8
Modification
we aim to add the DDOD model for mmdet.
Checklist
configs/ddod/ddod_r50_1x_coco.py
configs/ddod/ddod_r50_1x_fcos_coco.py
mmdet/models/dense_heads/ddod_fcos_head.py
mmdet/models/dense_heads/ddod_head.py
@Irvingao Thank you very much for your contribution. We will review the code soon.
@Irvingao Is the current accuracy all aligned?
@Irvingao Is the current accuracy all aligned?
yes.
The code should be cleaned before detailed review.
I believe the cleaning of the following commit is a good starting point.
https://github.com/shinya7y/UniverseNet/commit/173605e919c966c4d90a1030b71cc375023a90a8
@Irvingao Is the current accuracy all aligned?
yes.
The AP values in configs/ddod/README.md are the same as the official README.md.
What are the AP values of the models you trained?
@Irvingao Please develop based on the latest dev code. And retrain based on the code you submitted to see if the training and test accuracy are aligned?
@hhaAndroid How can I update the model to https://download.openmmlab.com/mmdetection/xxxxand get the downloading url ?
@hhaAndroid How can I update the model to https://download.openmmlab.com/mmdetection/xxxxand get the downloading url ?
You don't need to upload for now. We'll help upload it when the PR is done and then retrain.
@hhaAndroid I change branch from master to dev, slove all mentioned problems above and submit a new PR of DDOD.
|
GITHUB_ARCHIVE
|
Will instance of shared_ptr<Base> and shared_ptr<Derived> with same raw pointer share reference count?
Let's say I have two classes, Base and Derived, where Derived inherits from Base. Now, let's say I execute the following code:
shared_ptr<Derived> derivedPtr = make_shared<Derived>();
shared_ptr<Base> basePtr = derivedPtr;
Will the copying of derivedPtr to basePtr result in derivedPtr's reference count being updated (so that derivedPtr.use_count() and basePtr.use_count() equal 2)? Or, since the two instances of shared_ptr are different types, will the two have a separate reference count that isn't shared (so that derivedPtr.use_count() and basePtr.use_count() equal 1)?
Barring bugs in the standard library implementation (very rare), this should be pretty easy to test. Have you tried it? What does your attempt tell you?
Yes. You can even do: struct Foo { double x; double y; }; auto p = make_shared<Foo>(5.0, 10.0); auto py = weak_ptr(shared_ptr(p, &p->y);) ... pseudo-code, since I'm not at my computer right now.
Just tested this now and indeed, the reference count is updated.
So shared_ptr is more than just a pointer and a reference count.
It is a pointer and a pointer to a control block. That control block contains a strong count, a weak count, and a destruction function.
There are 3 ways to construct a shared_ptr.
First, you can construct it from a raw pointer. When that happens, it allocates a control block and sticks a "destroyer" function into it to destroy the raw pointer memory (delete t;).
Second, you can use make_shared. This allocates one block with space for both the control block and the object in it. It then sets the destroyer up to just destroy the object, and not recycle the memory. The destructor of the control block cleans up both memory allocations.
Third, there is the aliasing constructors. These share control blocks (and hence destruction code), but have a different object pointer.
The most common aliasing constructor is the one that creates a pointer-to-base, which you are doing above. The pointer-to-base differs from the shared ptr you created it from, but the control block remains the same. So whenever the control block hits 0 strong reference counts, it destroys the object as its original derived object.
The rarer one can be used to return shared pointers to member variables, like this:
struct Bob {
int x;
};
auto pBob = std::make_shared<Bob>();
pBob->x = 7;
auto pInt = std::shared_ptr<int>( pBob, &(pBob->x) );
now pInt is a pointer to pBob->x that shares the reference counting of the Bob created 2 lines above (where we made pBob).
pBob = {};
now the last pointer to the Bob is gone, but the object survives, kept alive by the pInt's control block (and strong count) ownership.
Then when we:
pInt = {};
finally the Bob is deallocated.
The cast-to-base implicit conversion you did in your question is just a variation of this.
This second aliasing constructor can also be used to do extremely strange things, but that is another topic.
shared/weak ptr is one of those cases where it seems you can just "monkey code" it without understanding it, but in my experience using shared ownership is sufficiently hard that fully understanding shared ptr is (a) easier than getting shared ownership right, and (b) makes getting shared ownership right easier.
This was a great explanation. Thank you!
|
STACK_EXCHANGE
|
QuillBot can help you make your book or website content sound more natural and precise, no matter what genre it is. Its built-in thesaurus offers synonyms based on relevance. This makes your sentences sound more natural, while the built-in grammar checker helps you fix errors. You can also use the formal mode if your content is more formal. The free version has a 700-character limit for each check.
Quillbot’s free version has limited writing modes. There are two options: Fluency and Standard. The former is great for basic writing, while the latter is for more complex content. The software integrates with Microsoft Office, Google Docs, and Google Chrome. Developers can also use it to create custom software. This AI-powered writing software will summarize, rewrite, and edit articles. But it won’t make the writing process as quick as you’d like it to be.
One of the main features of QuillBot is its simplicity. It is very simple to use and has an intuitive interface. You simply copy the text you want paraphrased onto the left side of the screen, and Quillbot will do the rest. It is available as a web-based app. A mobile version is in development. It is currently in beta. Until then, try out the free version and decide for yourself! It’s a great tool for your writing needs.
It uses Natural Language Processing to understand text and extract the information you need. Once you’ve extracted all the necessary information, you can export it to Word or copy and paste it into your editor. You can also get citations in various formats using Quillbot. The program provides an organized summary of all the information and a search bar that allows you to find books containing specific phrases. You can specify which words Quillbot should not change.
Quillbot’s key feature is its simplicity. The program works with any type of document – free or premium – and can even rewrite articles and press materials. With a wide range of features, Quillbot can be a great tool for journalists, writers, and other writers. It is also useful for journalists, business professionals, and other users. However, the main feature of Quillbot is its ability to cut down on time spent writing.
QuillBot’s other benefit is that it doesn’t require an account and users can use it even without signing up. The paid version can check up to 20 pages of content for plagiarism, while the free version is limited to 250 words. Although it may seem difficult to believe, this tool is well worth the price. It is a powerful AI that has many uses. You’ll be surprised by what it can do for you! QuillBot is a great tool for essay writing.
In addition to its multiple writing modes, QuillBot also has an official Chrome extension. Users can choose from four levels of synonyms and two writing modes. It can also auto-correct spelling and grammar. You can cancel your subscription at any time if you are not satisfied with the results. QuillBot is a great tool that you can use for a year. Or, you can choose to try the free version. It’s worth trying the free version first.
The Creative mode lets you experiment with different expressions. It tries to rephrase the input text with imaginative expressions, although the accuracy is less in this mode. Creative+ mode, on the other hand, tries to make more creative changes. Formal mode, on the other hand, rewrites content in a formal manner. This mode is best for academics, government employees, and other professionals. You can also adjust the length of your text.
QuillBot’s rewriting tool is the best feature of the app. It creates unique content by rewriting it based on user input. It uses artificial intelligence to find similar words. The software also has an inbuilt thesaurus that allows users to adjust the output. Users can select one of seven modes, each affecting the quality of the paraphrasing. The “Synonyms” slider controls how many words are replaced. The program will return the paraphrased text within seconds after the text has been rewritten.
Another notable feature of Quillbot is its plagiarism detection tool. It can detect plagiarism by comparing your text to that of another writer. By analyzing sentence structures, it can help you avoid plagiarism. Its plagiarism detection algorithm compares your writing to thousands of sources online and offline. The resulting scores reveal the percentage of writings that match your writing. You can also see any changes to your text that the software has made. The percentage changes aren’t as useful as a complete analysis of plagiarism.
|
OPCFW_CODE
|
I am editing my question as it was not detailed enough. I made an (unsuccessful) shortcut. Sorry, here is the entire story.
In my experiment I test subjects' reactions to some (simulated) situations. The subject read a scenario and then an expert evaluates the subject's behavior. The evaluation ranges from 1 to 5. There are 10 different simulations, and each subject takes all of them; thus, from each subject I have 10 data points. My experiment is went on 30 days. In each day, the same 10 simulations are used. In other words, in each day, subjects and simulations are fully crossed. Each day the simulations are different.
There are 3 categories of simulations (A, B and C). The categories are, from a theoretical perspective, different one from the other. Category A is tested by 3 simulations (a1, a2, a3); B by 3 (b1, b2, b3); C by 4 (c1, c2, c3, c4). a1:c4 at day 1 are different from a1:c4 at day 2 and so on. Importantly, participants took the experiment one time only and are not allowed to participate again. That is, if participant participates on, let say day 1, he will take the 10 simulations that were on day 1. But he could never participate again.
The 3 categories are the only ones I am interested in. In that sense, I think they should be treated as fixed effects. Each category is tested/represented by some simulations. Yet, for each category there are an infinity of possible simulations and I just sampled some. In that sense simulations are random.
The only question I am interested in here, is about the effect of the subject gender on the grades. I want to control for all other parameters. My question is how to account for the simulation and category. I would like to extrapolate my results beyond participants and the simulations representing the category. Yet it is also possible that gender would interact with category or simulation.
So here is one "basic" model:
lmer(grade ~ gender + (1|subject) + (1|simulation:day), data = My_data)
Yet, this model does not account for the possibility that gender has a different effect on simulations. So here is another one trying that.
lmer(grade ~ gender + (1|subject) + (1 + gender|simulation:day), data = My_data)
And here I get stuck. How does category play a role? Do I need to enter it as fixed effect? If yes, what about simulations? Does the following make sense?
lmer(grade ~ gender*category + (1 + category|subject) + (1 + gender|simulation:day), data = My_data)
Or is it better to give up the simulations, and treat category as random? But in this case, for a given day, the same category will appear several time for each subject (e.g., A will appear 3 times). Is't that a problem? As follow:
lmer(grade ~ gender + (1 + subject) + (1 + gender|category), data =My_data)
A final point: I have a lot of data (several thousands of participants), so convergence should not be a problem.
Thanks a lot for the help
|
OPCFW_CODE
|
Stax event reader skipping white space
I'm writing a utility to alter text entities within an XML file, using the STAX event model. I've found that the some of the white space in the source document isn't being copied to the output. I wrote this sample program:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.stream.*;
import javax.xml.stream.events.*;
public class EventCopy {
private static final String INPUT =
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
"<foo><bar>baz</bar></foo>\n";
public static void main(String[] args) throws XMLStreamException, IOException {
InputStream reader = new ByteArrayInputStream(INPUT.getBytes(StandardCharsets.UTF_8));
OutputStream writer = new ByteArrayOutputStream();
XMLInputFactory input = XMLInputFactory.newInstance();
XMLEventReader xmlReader = input.createXMLEventReader(reader, "UTF-8");
try {
XMLOutputFactory output = XMLOutputFactory.newInstance();
XMLEventWriter xmlWriter = output.createXMLEventWriter(writer, "UTF-8");
try {
while (xmlReader.hasNext()) {
XMLEvent event = xmlReader.nextEvent();
System.out.print(event.getEventType() + ",");
xmlWriter.add(event);
}
} finally {
xmlWriter.close();
}
} finally {
xmlReader.close();
}
System.out.println("\n[" + writer.toString() + "]");
}
}
Using the default Stax implementation that comes with Oracle Java 7, this outputs:
7,1,1,4,2,2,8,
[<?xml version="1.0" encoding="UTF-8"?><foo><bar>baz</bar></foo>]
The newlines following the XML prolog and at the end of the input have disappeared. It seems the reader doesn't even generate events for them.
I thought that maybe the XML reader was leaving the input stream positioned at the end of the last XML tag, and tried adding code to copy trailing characters from the input to the output:
...
} finally {
xmlReader.close();
}
int ii;
while (-1 != (ii = reader.read())) {
writer.write(ii);
}
But this doesn't have any effect.
Is there a way to get STAX to copy this XML more faithfully? Would a different STAX implementation behave differently here?
try using "" instead of "\n"
Reference: XML spec
A well-formed XML document follows the specification grammar:
[1] document ::= prolog element Misc*
[22] prolog ::= XMLDecl? Misc* (doctypedecl Misc*)?
[23] XMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>'
[27] Misc ::= Comment | PI | S
[3] S ::= (#x20 | #x9 | #xD | #xA)+
[39] element ::= EmptyElemTag
| STag content ETag
[40] STag ::= '<' Name (S Attribute)* S? '>'
[43] content ::= CharData? ((element | Reference | CDSect | PI | Comment) CharData?)*
[14] CharData ::= [^<&]* - ([^<&]* ']]>' [^<&]*)
[42] ETag ::= '</' Name S? '>'
The line feed between XMLDecl and the root element, and the one after the root element, are just S that the parser allows itself to ignore.
Let me give an example of a different white space. Suppose you have a slightly different XML:
private static final String INPUT =
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
"<foo>\n<bar>baz</bar></foo>\n";
The line feed between <foo> and <bar> is a CharData. Note that StAX will properly generate an event for this character.
If you really want to preserve S, then you'll need to read INPUT as text instead of as an XML document. Note that two XML document instances, one with these two specific S characters and one without them, are equivalent.
I figured the output is semantically equivalent to the input. That's not really what I'm looking for. I'm concerned that my users will complain if this XML filter makes unnecessary changes to the XML, and I'd rather not be in the position of having to argue with them that the changes don't matter.
@Kenster i guess you are short of options. Read the XML as text, then. I believe most XML parsers will ignore those whitespaces
|
STACK_EXCHANGE
|
Cannot add int[] type to an ArrayList
I am trying to add an array of integers to an ArrayList as follows, which does not work:
ArrayList<int[]> myAL = new ArrayList<int[]>();
myAL.add({2,3});
however, adding it by reference works:
ArrayList<int[]> myAL = new ArrayList<int[]>();
int[] id = {2,3};
myAL.add(id);
I believe you can add simple integers to ArrayList without reference, so how come you can't add an array without reference.
Thanks,
{2,3} it doesn't meaning anonymous object to make it you need myAl.add(new int [] {2,3});
I Think You Can Only Add An Int Array To An ArrayList not its values
This question has nothing to do with ArrayList at all, it's just a syntax error.
You always need the to use the anonymous array syntax when declaring an integer array outside an array declaration. This syntax is described in the Java Language Specification under Array Creation Expressions and shows that the new keyword is used
ArrayCreationExpression:
new PrimitiveType DimExprs Dimsopt
new ClassOrInterfaceType DimExprs Dimsopt
new PrimitiveType Dims ArrayInitializer
new ClassOrInterfaceType Dims ArrayInitializer
That why
int[] id = {2,3}; // declaration
is valid syntax, whereas
int[] id;
id = {2,3}; // assignment - outside declaration - fails compilation
is not. Therefore it is necessary to use
myAL.add(new int[]{2,3});
Your {2, 3} is an example of an ArrayInitializer. According to the JLS:
10.6. Array Initializers
"An array initializer may be specified in a declaration (§8.3, §9.3, §14.4), or as part of an array creation expression (§15.10), to create an array and provide some initial values."
The first three cases are for declaring variables, and that's not what you are doing. The final case corresponds to what you are trying to do ... create an array instance ... but if you look at the linked section you will see that you need to use the Java new keyword to do this.
So why does the Java syntax not allow you to do this ( myAL.add({2,3}); )?
Well, I think that the primary reason is that {2, 3} is not sufficient to say what type of array should be created ... in all such contexts.
Consider this:
ArrayList myAL = new ArrayList();
myAL.add({2,3});
What kind of array is appropriate here? Should it be an int[]? Or a long[]? Or Integer[]? Or Object[]?
The other thing to remember is that array initializers were part of the Java language in Java 1.0 ... well before the Java language included generic types and limited type inferencing that might (hypothetically) allow the ambiguity to be resolved in a sensible fashion.
|
STACK_EXCHANGE
|
Cosmological perturbations and energy in an expanding universe?
I was reading an interesting book from cosmomogist Viatcheslav Mukhanov Physical Foundations of Cosmology and I had a specific question about it:
It is usually said that energy conservation is difficult to define in cosmological scales since, for example, dark energy density appears to be constant in each point of space, so its total energy increases as the universe expands.
In section problem 8.10, Mukhanov mentions that cosmological perturbations can "violate" energy conservation and be excited (therefore, gaining energy) from the Hubble flow. Also,in this article Mukhanov says:
Since the primordial fluctuations were obtained as a result of the amplification of initially Gaussian quantum fluctuations by the external classical source (they acquired energy from the Hubble expansion), the resulting gravitational potential must be described by a Gaussian
random field up to the second order corrections due to the nonlinearity of the Einstein equations
I had a question about this technical aspect:
Has this phenomenon been observed or experimentally verified? Can any types of perturbations (or anything else) actually gain energy fron the Hubble expansion?
It hasn't been directly experimentally verified. But, the predictions of inflation are reasonably consistent with observations.
You can make a formal analogy between the perturbation equations during inflation, and a harmonic oscillator with a time dependent spring constant. For example, see Section 9 of https://arxiv.org/abs/gr-qc/9909001. In the latter case, there is a well known method (the "Bogoliubov transformation") that lets you compute particle creation due to the time dependent Hamiltonian. Then, the energy in the perturbations are extracted from the external driving force causing the spring constant to change with time.
The concept of energy in GR is murky though, and I would treat Mukhanov's statements as an analogy that works at the level of the equations for the linearized perturbations. In particular, there is no external driving force acting on an expanding Universe, or explicit time dependence in the field equations.
but has there been any observations or experiments that verified these kind of phenomena? Have we seen any system that actually gains energy and "gets excited" from expansion? @Andrew
@vengaq Depends on what you mean. Given that inflation happened, the equations for the perturbations take the form of a harmonic oscillator with a time dependent spring constant. This is also called the Mathieu equation and is well known and studied. Among other applications, there are analog systems that implement particle creation type phenomena. If your question is whether or not inflation happened, that's up in the air. The observational evidence is consistent with inflation, but not conclusive.
For what it's worth, I don't think the question of whether there are observations or experiments is really the right question here, actually. If inflation is right and the equations in the review I linked are correct or approximately correct, then particle creation occurred. The question is whether inflation really happened. Directly looking evidence of this specific effect might not be the most effective way to establish whether inflation happened observationally.
|
STACK_EXCHANGE
|
Please tell am is it correct logic for Navigation?
I am fairly new to ASP. Net 2.0, background MS Access 2000/2003 and
working knowledge of SQL 2000.
Blank Page loads with 2 Text Boxes (TxtLast, TxtFirst), a Command
Button (CmdSearch) and a Gridview to Display result Data (4 Columns:
Select but Text is: Show Details, Unique Record ID, Last Name and First Name).
users enter Data in TxtLast, TxtFirst and Click CmdSearch.
If Records are found, matching results are displayed in Gridview
(Based on Search Criteria). There may be multiple Names in Gridview.
Users then Click Select (Show Details) in Gridview. Page 2 is then
Invoked with Unique Record as Query String Parameter.
A FormView with Detail Information(i.e. Phone#, Address, City, State
etc.). There users can edit, Update each Detail Record. Outside
Gridview, I have a Command Button (CmdGoPage1) to return to Page1 after users are done with Page2. I am using Response.Redirect "Page1") for CmdGoPage1_Click event because that's what all I know with my limited knowledge.
All these logic works fine except when Page1 is reloaded after Response redirect from Page2, all Data in Gridview1 of Page1 are lost, also TxtLast and TxtFirst Text Boxes are null (The same State as it was loaded very first time).
Now users have to re-search again in Page1 by using TxtLast and
TxtFirst to retrieve the original results. That's not what is my
intention. I would like to see after Page1 is post backed from Page2, it should not loose any values in Gridview. If users try to find another name, only then they need to re-search in Page1.
Now, I came up with a Solution (I do not like but I got no choice) by Using Multiview with 2 Separate Views (for Page1 and Page2). They work fine, I do not loose values between Page Navigations but I have to put all Codes of Page1 and Page2 in a single huge Page. I
noticed it will be cumbersome to debug as I add more pages in my
Project (I still have to add 4 more pages that are all dependant of
Page 1). Also I really do not know what will be the overall
My question is that am I using right logic by using MultiView? Or is there any way I can accomplish the same result by using separate Pages. But I still need navigation bars to Navigate between Pages. I did this type of developments easily in MS Access by using Tab Strips/Pages and subforms. Can I use MS Access like logics in ASP.net to accomplish the same result?
Any suggestion or a guide (Example, URL or Book Name) for this issue
will be highly appreciated...
|
OPCFW_CODE
|
As per the International Standard ISO 8601, different countries has a different date and time format. It affects the first day of the week also. For example, in the USA and Canada, the First day of the week is Sunday, whereas Monday is the first day of the week in the UK and Australia.
You need to customize these settings in Microsoft Teams to view the calendar and your schedules properly. For example, the work week starts on Sunday and ends on Thursday in the Middle East, where I live. So, I used to get few complaints about the calendar view in Microsoft Teams since these are not normal settings in most products.
In this post, let me show you how to change the first day of the week or work week in Microsoft Teams. If these changes are not taking effect, then you need to verify few more settings in some other places. I will cover them as well.
How to Change the First Day of the Work Week in Microsoft Teams
In this example, we will take my case to set up Sunday to Thursday as the work week. As this slightly matches the ISO 8601 standard to USA settings (The first day is Sunday), we need to ensure the App Language in MS Teams is properly configured.
App Language in Microsoft Teams is the first place you need to look for changing the first day of the week.
1) Open MS Teams and go to settings. With the latest upgrade, you need to click the 3 dots to open the settings, not the profile picture.
2) In the General setting, find out the Language settings. You need to change the App language based on your requirements.
Currently, in my Teams, the week is starting on Monday because the App language was set to the United Kingdom. Since I need to make Sunday as the First day of the week, I need to change the language to the United States.
3) Do not forget to press ‘Save and restart’. MS Teams will close and start to take this effect.
With the above settings, mostly the first day of the week will change.
Work Week in MS Outlook
If you need to change the first and last day of the work week, you need to change the settings in Outlook on the same PC where MS Teams and Outlook logged in with the same user account.
Also, you can use the Outlook online version like Office 365 or personal Outlook (outlook.live.com) in the web browser to change these settings. I’m showing from the Outlook client program, which will suit most business users.
In this example, my work week ends on Thursday.
4) Open MS Outlook and Options from the File Menu.
5) In Calendar options, change the start and end days of your work week. For example, it is Sunday to Thursday in the middle east. It is better to set your work timing also.
Press OK to save the settings.
To take this effect, you may need to wait for some time, then restart the MS Teams. Since Teams picks up this data from Outlook, it needs to be restarted.
If that doesn’t work, restart both applications or even the computer to take effect.
Sometimes, both methods will not help to solve the issue that you want to achieve. Even though those 2 steps are must-do to change the first day of the week in Microsoft Teams, there is another place you need to verify language settings.
For example, if you change the App language to the United States in MS Teams, but your Windows 10 or Windows 11 default regional language is still in the United Kingdom, it will confuse MS Teams. The system default Language should match the same language in MS Teams to affect the first week (work week) settings.
6) Search for ‘Region’ in Windows 10/11 search and open ‘Region settings’.
Make sure the ‘Region format’ language is set to the United States if you set the same in MS Teams according to your date format requirements. If you purposely configured the UK in Teams, then the Windows regional format also should be in the UK. In my example, it should be the United States.
Once you change the language in the Region settings, you need to restart the computer to take effect. Remember, this change will be applied to all other settings and applications on your computer. Mostly, it should be fine if you are following a certain date/time format.
With all 3 steps, you must get the proper first day of the work week in Microsoft Teams.
Also, you must be aware that you can change the calendar view in MS Teams by clicking the below drop-down. It will change the normal and work week. It will help you to see your meetings and appointments in a single view easily.
For some reason, Microsoft Teams doesn’t have separate settings to change the start and end of the week. It fully depends on the Outlook and OS settings. But doing the required changes will correct the start and end of the weeks in Teams.
Do let us know how did these steps help you? Or did you find any better ways to change these formats?
|
OPCFW_CODE
|
oding and programming have become quite popular today. With many beginners investing in learning and using coding to create apps, websites, and software, most of them rarely regard small yet critical issues on security. This often leads to mistakes that may at times be costly, especially when the said errors lead to a breach of security. Here are some of the five most common security-related errors that programmers often make.
Copy-Pasting Third-Party Open-Source Codes
It is every programmer’s wish to have a program completed as fast as possible. This ideology often makes most programmers cut corners by copying and pasting free codes from the internet into their program code. Whereas it is recommendable to seek such guidance, it is essential to understand the security risks. Such codes, if uninspected, can compromise the integrity of the app by making it vulnerable to back-end hacking, especially since everyone has access to the codes.
Testing and dry-running a code and final software are significantly important as it determines whether the final product works as desired. Most coders and programmers either overlook the testing aspect or do shallow testing. A proper functional test should include the security vulnerabilities of the software regarding the infiltration that the code can sustain. Also, most coders do the testing on their own, leading to a potential bias.
Failure to Delete Testing Data and Backdoor Accounts
Testing, when done, often involves inspecting the codes line by line. It also involves creating accounts where necessary to test data inputs. Sometimes, programmers tend to forget to reset the code to eliminate the created accounts. This leaves the app significantly at risk of being compromised by the third parties whose accounts were not deleted. Credentials of such accounts are also often handled in a less secure manner, leading to a compromise, especially if such accounts were accorded administrative rights. A common mistake people make with their security is thinking that by throwing away or just conventionally deleting the information you are secure, but that isn’t the case. Any hardware that has testing data needs to be cleaned by either the tester or the facility to prevent the possibility of recovery.
Forgetting Analytics Tracking
Programmers rarely invest in developing and embedding analytic tracking codes in their programs. Analytics tracking can perform numerous functions including tracking the location of individual logins to the activities conducted in software and the timing for the same. Failure to include such tracking codes implies that a potential breach would go undetected by the app or program, something that promotes hacking.
Failure to Encrypt Sensitive Data
Data encryption should be undertaken from the back-end when coding. Whereas every programmer may understand the need for this measure, program developers rarely comprehensively encrypt every sensitive piece of data across all layers of the code. This may leave personal credentials and information at risk of being illegally accessed by hackers.
Coding requires a proper understanding of not just the programming codes but also the potential security risks to avoid them. It is every coder’s wish that a program is as secure as necessary. However, the above-outlined errors often occur when programmers are not keen.
|
OPCFW_CODE
|
Catch errors thrown in ensure block
Closes #78
If an error ever occurrs in an ensure block, then that is a serious problem. We should inform the user that this is not good by printing a huge nasty warning to the console.
However, if such a situation does occur, at least it should not leave the process tree in an inconsistent state. Other exit hooks should still run, and other processes should still shut down. Currently if such an error does occur, it will blow the stack, and cause finalization to stop
abrubtly. This is very much not good.
Here we're rescuing any such errors, printing them to the console, but crucially we are NOT re-raising them. This allows us to proceed with finalization, which will leave the process tree in a more consistent state.
It's worth noting though that such a situation really is very undesirable. The Rust language does a hard abort of the process with an error exit code if a destructor panics, we cannot do something similar in effection, since we want to run in browsers, and browsers cannot hard-exit.
A preview package of this pull request has been released to NPM with the tag catch-ensure-errors.
You can try it out by running the following command:
$ npm install effection@catch-ensure-errors
or by updating your package.json to:
{
"effection": "catch-ensure-errors"
}
Once the branch associated with this tag is deleted (usually once the PR is merged or closed), it will no longer be available. However, it currently references<EMAIL_ADDRESS>which will be available to install forever.
Generated by :no_entry_sign: dangerJS against 6ca7c79b292540d811f4cfacd2af83a6a8be09f3
A preview package of this pull request has been released to NPM with the tag catch-ensure-errors.
You can try it out by running the following command:
$ npm install effection@catch-ensure-errors
or by updating your package.json to:
{
"effection": "catch-ensure-errors"
}
Once the branch associated with this tag is deleted (usually once the PR is merged or closed), it will no longer be available. However, it currently references<EMAIL_ADDRESS>which will be available to install forever.
Generated by :no_entry_sign: dangerJS against 9dc53336b59443bcf7a9e41540c9bc5d27aca461
A preview package of this pull request has been released to NPM with the tag catch-ensure-errors.
You can try it out by running the following command:
$ npm install effection@catch-ensure-errors
or by updating your package.json to:
{
"effection": "catch-ensure-errors"
}
Once the branch associated with this tag is deleted (usually once the PR is merged or closed), it will no longer be available. However, it currently references<EMAIL_ADDRESS>which will be available to install forever.
Generated by :no_entry_sign: dangerJS against 5867178c61724fff8dcc02ebd439058f675db90f
A preview package of this pull request has been released to NPM with the tag catch-ensure-errors.
You can try it out by running the following command:
$ npm install effection@catch-ensure-errors
or by updating your package.json to:
{
"effection": "catch-ensure-errors"
}
Once the branch associated with this tag is deleted (usually once the PR is merged or closed), it will no longer be available. However, it currently references<EMAIL_ADDRESS>which will be available to install forever.
Generated by :no_entry_sign: dangerJS against 8ec127f427554f9d063064e4817eec50abba295f
@cowboyd not sure why the type tests are failing. It looks unrelated to this PR though.
@cowboyd not sure why the type tests are failing. It looks unrelated to this PR though.
I'm guessing that it's because we have our TS config pointed to esnext which is a moving target..... which moved. Maybe if we peg it to es2019 or some fixed target?
A preview package of this pull request has been released to NPM with the tag catch-ensure-errors.
You can try it out by running the following command:
$ npm install effection@catch-ensure-errors
or by updating your package.json to:
{
"effection": "catch-ensure-errors"
}
Once the branch associated with this tag is deleted (usually once the PR is merged or closed), it will no longer be available. However, it currently references<EMAIL_ADDRESS>which will be available to install forever.
Generated by :no_entry_sign: dangerJS against 1ec6b7a10226e7ceb5e6059653e3ad6c724c6954
A preview package of this pull request has been released to NPM with the tag catch-ensure-errors.
You can try it out by running the following command:
$ npm install effection@catch-ensure-errors
or by updating your package.json to:
{
"effection": "catch-ensure-errors"
}
Once the branch associated with this tag is deleted (usually once the PR is merged or closed), it will no longer be available. However, it currently references<EMAIL_ADDRESS>which will be available to install forever.
Generated by :no_entry_sign: dangerJS against 1ab9ce5bab504af111e6284d132934bb3ba9578d
:shipit:
|
GITHUB_ARCHIVE
|
December 8th, 2003, 10:23 PM
PHLAK 0.2 Released
Head on over to www.phlak.org ro grab the latest version of PHLAK.
Hope you like it.
December 9th, 2003, 12:11 AM
thanks, i was waiting for it
December 9th, 2003, 12:30 AM
Phlak is a decent tool, but in testing I have found that it doesn't run on some systems. Knoppix, on the other hand, I have seen work on many, many systems. Knoppix-STD works well but doesn't have all the updated tools. Don't worry, there is an operating system coming very soon that bridges the gap.
December 9th, 2003, 05:50 PM
hey sysmin, Im not being a douchebag here but i was just wondering what was the system it wouldnt run on? B/c i have had success with 3 totally (IMO) different systems. And BTW has anyone tried it yet on a laptop?
December 9th, 2003, 06:25 PM
Sieve and I both develop PHLAK on Laptops.
I have a Dell Inspiron 2650, works great. Sieve has a Toshiba (not sure the model) Celeron 1.3 or so. So there is laptop support.
December 9th, 2003, 06:39 PM
Any Athlon 64 Support? or Athlon 64 Laptop support?
December 15th, 2003, 06:57 PM
real quick, did you guys finally remove the morphix references?
January 5th, 2004, 05:10 PM
Any updated information on the PHLAK OS? I just recently downloaded it and may give it a whirl this weekend.
January 15th, 2004, 02:42 PM
Any idea why www.phlak.org is down?
Has been down for the last 24 hours or so at least,
any idea why?
\'Im just this guy ya know!\'
January 15th, 2004, 03:20 PM
I have a feeling the main reason that it is down is because an american tv show called Screen Savers, which happens to have a pretty good tech following, did a special and put a link on their webpage to get to phlack and download their info so either their server has been overloaded or they took it down because of too much traffic.
I did have luck a day ago with going to the direct link and then refreshing. so give this a try also
Seems to be a pretty good distro so far, I played with it a bit last night and liked what I saw.
Duct tape.....A whole lot of Duct Tape
Spyware/Adaware problem click
|
OPCFW_CODE
|
In the digital age, technology has made significant strides, enhancing every sector of human life. One such area that has seen substantial innovation is nutrition. The emergence of nutrition apps has revolutionized how individuals track their diets and maintain a healthy lifestyle.
However, these apps are not only beneficial for users but also present a creative playground for programmers. Here lies the opportunity for coders to explore, innovate, and challenge themselves in an exciting landscape.
Coding for Nutrition: Explore Your Creative Side
Programming is an art where one’s creativity is the limit. In the realm of nutrition apps, programmers can stretch their imagination to create applications that provide solutions to various dietary issues. They can develop features that help users monitor their food intake, track their nutrient consumption, suggest meal plans, and more. This diversity in programming tasks allows coders to explore their creative side.
The phrase “write my code for me” is a call to action for programmers to take on the challenge of creating nutrition apps. It is a journey that allows them to innovate and come up with features that make tracking nutrition easy and fun. The user-friendly interface, interactive features, and the ability to customize based on the users’ dietary needs and preferences are some of the aspects that programmers can experiment with while coding for nutrition apps.
Moreover, coding for nutrition apps enables programmers to expand their knowledge base. They are required to understand the science of nutrition to effectively create features that provide accurate dietary information. This multidisciplinary approach to coding enhances their problem-solving skills and makes the process more intriguing.
Unleash Your Creativity with Nutrition App Programming
Nutrition app programming is not only about writing lines of code but also about bringing innovative ideas to life. It provides the perfect platform for programmers to unleash their creativity. They can experiment with various features such as meal trackers, calorie counters, personalized diet plans, and more.
The “write my code for me” challenge pushes programmers to think out of the box. They need to consider the user’s perspective and come up with features that not only meet dietary requirements but also ensure a seamless user experience. From designing attractive interfaces to incorporating advanced algorithms that provide real-time dietary feedback, the possibilities are endless.
Furthermore, the dynamic nature of nutrition app programming allows programmers to continuously refine their skills. With changing dietary trends and user preferences, they need to update their apps regularly. This constant need for improvement encourages programmers to stay updated with the latest industry trends and technologies, thereby enhancing their coding skills.
Nutrition Apps: Where Coding Meets Creativity
In the world of nutrition apps, coding and creativity go hand in hand. Programmers are tasked with the challenge to “write my code for me” that not only works efficiently but also appeals to the users. They need to balance functionality with aesthetics, making the apps intuitive and engaging.
The creative process in nutrition app programming involves brainstorming ideas, designing interfaces, writing code, and testing the features. It requires a deep understanding of the users’ needs, the science of nutrition, and the technical aspects of coding. This fusion of different disciplines makes nutrition app programming a unique and exciting field for programmers.
Moreover, the satisfaction of creating an app that helps individuals lead healthier lives adds to the allure of nutrition app programming. It gives programmers a sense of purpose and motivates them to continually improve their skills and create better apps.
Elevate Your Coding Skills in the World of Nutrition Apps
The world of nutrition apps offers endless opportunities for programmers to elevate their coding skills. The “write my code for me” challenge encourages them to push their boundaries and come up with innovative solutions. They get to work on various aspects of app development, including front-end and back-end coding, user interface design, database management, and more.
Moreover, the continuous evolution of dietary trends and technologies keeps programmers on their toes. They need to stay abreast of the latest developments and integrate them into their apps. This constant learning and adapting process helps programmers enhance their coding skills and stay relevant in the competitive tech industry.
In conclusion, nutrition apps provide an exciting and creative playground for programmers. They offer a platform where they can explore their creativity, innovate, and enhance their coding skills. The “write my code for me” challenge is an invitation to programmers to step into this fascinating world and create apps that make nutrition tracking easy, fun, and efficient.
|
OPCFW_CODE
|
I see a hint of logic here, perhaps I can clarify something for you. You claim the following:
" "It seems the first is better, since it doesn't require us to give up anything."
So giving up an independent monetary policy 'doesn't require us to give up anything'.
Amazing, Professor Einstein. And so intelligent. "
Giving up an independent monetary policy requires us to give up an independent monetary policy... we have a saying in Bulgaria that is literally translated "wooden philosopher" to describe someone that brings up trivially useless observations into an argument, when they can clearly understand what the other person is saying.
I was considering only two alternatives, both of which include giving that up. "it doesn't require us to give up anything" means "anything extra". You can deduce this from the setup of the logical problem. Everyone is giving up something at all times, so logically "giving up something" must have the meaning of "giving up something more that you are already giving up".
You wouldn't be bringing this up if you had some idea of the history of the Bulgarian central bank. Giving up the monetary policy in Bulgaria is considered a good thing by economist, because it takes away the power of Bulgarians to print money and had it out to their friends. History has shown that Bulgarians are irresponsible with the nation's money supply.
You can praise Einstein all you want, that isn't going to help you.
What you just wrote has zero value for the human race, it is a waste of computer memory. You seem to have developed some kind of obsession with me which is not healthy.
I said I would leave you but I feel the need to address your recent burst of petty insults to the rest of the people with which I am having a discussion.
You are the annoying child that is interrupting the discussion of the adults.
I have never felt insulted, but I have pointed out when people have tried to insult me. The reason I do that is because I don't care much for that, it breeds an uncivil discussion and I don't like ad hominem attacks.
You are clearly homophobic and I'd suggest for you to leave this forum. We don't need Westerners reading these things thinking we are backwards people.
By the way Faust,
Calling someone gay is not an insult, try again.
I despise your homophobia.
If you choose to have issues with someone else's sexuality do it in the privacy of your own home with your buddies.
You are giving Bulgarians a bad name in the eyes of the civilized world. Try saying something intelligent.
Though, in some states there are laws to protect that child from such force. So I guess this comparison state vs. parents can't really be defined properly.
Needless to say the state has more power than the parents always.
I'm sorry to everyone that has to read this nonsense.
DrFaust has some sand stuck in his vagina from a previous encounter, if we could raise some money for him to see a doctor, that will be great.
"I still see a parent or peer forcing their views on a venerable child, as even worse than your example of life under an Autocracy."
Depends on what "forcing" is used. Surely the disapproval and economic sanctions a parent uses are not comparable to the fear of torture or death a regime uses.
Of course the reasons behind the failure of the soviet system are complex.
But in my opinion what you are alluding to is far from the most important issue, your argument can be used to conclude that any system in Russia would have failed.
I think the system as a concept is founded on inconsistencies and fallacies about history and human nature. To prove this I can point to numerous stupidities of the regime, Lenin outlawed money for a brief period, then very quickly they realized that was stupid.
The communist ideologies mistakes have been documented in thousands of works.
It has nothing to do with the underdevelopment of Russia, look at countries like Poland, Germany, Czechoslovakia, they were not like Russia, but had experienced huge capitalist growth and met all the conditions Marx put out for the success of communism.
The shortages and lack of variety were not due to anything but the economic system.
I think the main reason that the USSR as a political entity turned capitalist was due to the KGB's effort. They engineered the collapse and profited greatly. Look up the last head of the KGB and his profiteering from the export of Russia's gold.
This thing happened in Bulgaria too under the DC.
The USSR could have kept on repressing the people for much longer.
"There was a moment when on one of the congresses it was declared that the communism was almost achieved, lol. "
This is what the EU does now. One guy says "We have successfully completed the 5-year plan with x% increase and y% decrease", then mindless "Euorpean representatives" applaud themselves.
"It was Lenin who made some revisions and developed the strategy for make it possible."
True, and like anything it has been revised many times and evolved.
It is true, most of these people call themselves "Marxist-Leninist", but I didn't want to say that because people might think I was characterizing them as being totalitarian or sympathetic to the Lenin regime.
"Socialism similar to the Western European model which does not reject capitalism can functions in a democratic system. But it has nothing to do with Marxist’s economic system."
There are two points. The first is the definition of socialism. From my point of view, that is defined by Marx as a transitional state towards communism. The looser definition is more or less a welfare state with redistribution of wealth and a market economy, and that obviously exists.
The other point is that even though this may be the case (with the looser "socialism"), proponents of so called "democratic socialism" ARE calling for a move towards a Marx-like socialism.
I need to make a few corrections...
14th century (obviously)
"The party whose current members"
Bulgaria is the EU Country with the Largest Number of Women in the IT Sector
Hewlett Packard Enterprise Sold its Assets and Contracts in Bulgaria
|
OPCFW_CODE
|
Abit NF7-S Motherboard Installation Notes
Last updated: 10/22/04
This article will show you how to solve stability and other problems that may be encountered when installing the Abit NF7-S version 2 and other Nividia nForce2, AMD socket A motherboards. The Abit NF7-S Version 2 motherboard is very stable when installed with quality components and configured correctly.
The computer used for this article has an AMD Athlon XP 1900+ processor with a 266 MHz front side bus (FSB) and 512 MBytes of Crucial 333 MHz DDR memory. The basic motherboard, memory, and CPU installation is similar to the procedures used to install the Abit KX7-333R Motherboard. We installed Windows XP Pro and Service Pack 2 (SP2).
1. BIOS. Because of the stability problems we encountered, we flashed the motherboard BIOS in an attempt to fix them. It didn't. We did not revert to the BIOS version that was originally installed. The BIOS was flashed to version 25 dated 07/06/04 and released 07/28/04. Be sure to use the BIOS for your version of the motherboard. Also, the Abit NF7-S2 motherboard is not the same motherboard as the NF7-S Version 2.
2. Old Display Adapters. This motherboard will not work with old 1X/2X/3.3 volt display adapters. If you have a 2X/4X board and the computer does not boot, jumper it for 4X. Even so, there were some boot-up stability problems I thought were related to the old Diamond Stealth III S540 display adapter (S3 Savage4 Pro chipset) that was installed. A
new VisionTek Xtasy 9200 SE 128MB display adapter was installed and those problems disappeared.
3. Memory. Always use major brand-name memory (Crucial,
OCZ, Mushkin, Corsair, Kingston). Always observe anti-static precautions when installing it. The NF7-S is fussy about which DIMM sockets the memory is installed in. The CD-ROM version of the User Manual states:
- Install DDR SDRAM modules in series from DIMM3 to DIMM1 sockets.
- When installing two PC3200 modules, install them in DIMM3 and DIMM2.
You won't find this in the hard copy of the User Manual and it is rather backwards from conventional logic. It does make a difference.
The default CMOS Setup memory timing settings are too fast for the memory most people would purchase for this motherboard. If left as they are, the motherboard will likely freeze-up randomly, if it works at all. Check you memory specifications and set them accordingly. The followings settings in the Advanced Chipset Features worked for the Crucial memory we installed:
- Row-active delay 11
- RAS-to-CAS delay 3
- Row-precharge delay 3
- CAS latency time 2.5
The default CAS latency time of 2.0 will glitch most memory sold for PCs. The Row-active delay setting is a higher number than that specified for the memory we used and that of many other memory modules, but should result a faster system than the specified value.
4. Frontside Bus (FSB) and Memory Bus Synchronization. AMD Athlon processors use separate FSB and memory buses. For greater stability, both should be set to the slower of the two, processor (FSB) and memory. Very little performance is gained by setting them otherwise.
5. IDE Drivers. If you experience disk drive problems, you may have to install the Windows IDE driver instead of Nvidia IDE driver. We had problems with the Nvidia driver and are now using the Windows XP driver. The Nvidia driver setup has an option for installing the Windows driver.
Next - Other Potential Problems
|
OPCFW_CODE
|
Top 23 C Driver Projects
BlackHole is a modern macOS virtual audio driver that allows applications to pass audio to other applications with zero additional latency.Project mention: How do screen record with audio while using headphones? | reddit.com/r/mac | 2022-06-30
Use BlackHole (you only need the 2ch version): https://github.com/ExistentialAudio/BlackHole
Windows Precision Touchpad Driver Implementation for Apple MacBook / Magic TrackpadProject mention: My new custom wrist rest | reddit.com/r/MechanicalKeyboards | 2022-06-27
I use Mac Precision Trackpad.
Static code analysis for 29 languages.. Your projects are multi-language. So is SonarQube analysis. Find Bugs, Vulnerabilities, Security Hotspots, and Code Smells so you can release quality code every time. Get started analyzing your projects today for free.
Windows File System Proxy - FUSE for WindowsProject mention: WinFsp 2022+ARM64 | reddit.com/r/programming | 2022-06-25
User mode file system library for windows with FUSE WrapperProject mention: User mode file system library for windows with FUSE Wrapper | news.ycombinator.com | 2022-05-13
PostgreSQL database adapter for the Python programming languageProject mention: Engineers complaining about Docker for Mac? | reddit.com/r/docker | 2022-07-05
HackSys Extreme Vulnerable Windows DriverProject mention: BOF in Proving Grounds? | reddit.com/r/oscp | 2021-10-13
Windows drivers: https://github.com/hacksysteam/HackSysExtremeVulnerableDriver https://github.com/dhn/OSCE https://purpl3f0xsec.tech/2019/06/18/osce-prep-1.html Windows Exploitation Pathway https://github.com/epi052/OSCE-exam-practice
Driver for the SSD1306 and SH1106 based 128x64, 128x32, 64x48 pixel OLED display running on ESP8266/ESP32Project mention: OLED display code works with Arduino Uno but not NodeMCU 1.0 (ESP8266MOD) | reddit.com/r/arduino | 2022-01-02
Less time debugging, more time building. Scout APM allows you to find and fix performance issues with no hassle. Now with error monitoring and external services monitoring, Scout is a developer's best friend when it comes to application development.
Free and Open Source API and drivers for immersive technology.Project mention: Does VR run on Linux and how is the G2? | reddit.com/r/HPReverb | 2022-05-30
Synology DSM driver for Realtek RTL8152/RTL8153/RTL8156 based adaptersProject mention: Any direct fast solutions from synology ? | reddit.com/r/synology | 2022-06-30
My old 918+, with a usb 2.5GBps dongle worked just fine, with large file transfer peaking about 2.1 Gbps between servers. Slow is not the 918+ 's problem . https://github.com/bb-qq/r8152 .
Free exFAT file system implementationProject mention: How can you create a filesystem file for your OS? | reddit.com/r/osdev | 2022-03-21
There are Unix tools for making virtually every kind of disk image imaginable. And you can even put on partition tables not just FS images if you want to test with complete virtual hard disks. There are tools for ext*fs versions, ntfs, fat, fat32 and exfat as well: https://github.com/relan/exfat .
EFI FileSystem driversProject mention: Fedora considers deprecating legacy BIOS | news.ycombinator.com | 2022-04-21
EFI doesn't actually mandate FAT for the system partition. The system partition can be any filesystem that the firmware supports.
Of course, pretty much all EFI implementations only support FAT, so it's a bit of a moot point; the only one I'm aware of that supports anything else is the one on Intel Macs, which also understands HFS+.
You can find a huge selection of EFI filesystem drivers at https://efi.akeo.ie/ but they're derived from GRUB and hence GPL, so don't expect the likes of American Megatrends to be bundling these any time soon.
DSM driver for Aquantia AQC111U(5Gbps) based USB Ethernet adaptersProject mention: Newb here, brand new to SFP+ switches and have questions before buying | reddit.com/r/mikrotik | 2022-05-29
Not from Netgear (Synology's BB-QQ ethernet has issues)
Windows kernel-mode Bluetooth Profile & Filter Drivers for PS3 peripheralsProject mention: SCP Toolkit Not Working All of a Sudden | reddit.com/r/pcmasterrace | 2021-09-28
With BthPS3 they have a filed bug where Windows 11 seems to be throwing out the driver every time there is a software update https://github.com/ViGEm/BthPS3/issues/26
Linux HWMON (lmsensors) sensors driver for various ASUS Ryzen and Threadripper motherboardsProject mention: PSA: If you're experiencing weird fan behaviour on an ASUS motherboard after upgrading to Linux 5.17, blacklist the asus_wmi_sensors module | reddit.com/r/archlinux | 2022-04-02
Linux 5.17 added the Linux ASUS WMI Sensors driver which provides sensor readouts for things like fan speed on some ASUS motherboards. However, some boards like the PRIME X470-PRO have buggy firmware where subsequent calls to the interface can make fans stuck at max speed or just stop spinning altogether, leading to unpleasant noise and forced shutdowns. The problem quickly shows up if you have something like bottom running in the background, constantly querying temperature sensors. A quick fix is to blacklist the module by putting the following in /etc/modprobe.d/disable-asus-wmi.conf:
A WIP "Vulnerable by Design" kext for iOS/macOS to play & learn *OS kernel exploitation
TFT and touch pad drivers for LVGL embedded GUI libraryProject mention: GUI for software, not games, but lighter than Qt ? | reddit.com/r/cpp | 2022-04-08
LVGL. It is designed for embedded system, but can be used on virtually every platforms.
Huawei WMI laptop extras linux driver
Linux driver for VEIKK-brand digitizersProject mention: Any drawing tablet recommendations? | reddit.com/r/archlinux | 2022-01-16
I was able to make use of my Veikk S640 before in linux using a community-made driver. It worked great, but the tablet, itself, is not that durable. The tablet's surface got scraped off by the pen over a year of usage. I don't even press hard on my pen and I use light sensitivity on the pen pressure settings. It has now an official driver from Veikk, but I haven't tried it.
Realtek RTL8125 driver for ESXi 6.7Project mention: ESXi does not detect NIC despite having the correct Intel driver installed | reddit.com/r/vmware | 2021-09-08
I did find this link https://github.com/realganfan/r8125-esxi/releases to those network cards.
Updated Fusion-io iomemory VSL Linux (version 3.2.16) driver for recent kernels.Project mention: FusionIo ioDrive2 Windows install tutorial | reddit.com/r/homelab | 2022-01-13
Download drivers from a year kinda close to when the drive came out. I tried more recent versions that didn't work and settled one from Lenovo's site. Following the compatibility list from this Github page for an open-source version of the drivers for Proxmox, I picked dd_fusion-io_iomemory_sx300_gen3.5_220.127.116.118_x64_windows.exe (version 3 for my model no.)
Saitek X52/X52pro drivers & controller mapping software for LinuxProject mention: Advice on working with LibUSB | reddit.com/r/linux_programming | 2022-02-07
If you want a sample project that uses libusb, feel free to peruse my libx52 project here.
Linux ACPI and Platform Drivers for Surface Devices using the Surface Aggregator Module over Surface Serial Hub (Surface Book 2, Surface Pro 2017, Surface Laptop, and Newer)Project mention: Which Wi-Fi adapter has the best support on Linux? | reddit.com/r/linuxhardware | 2021-09-15
Other platforms, such as Microsoft's Surface devices have the Surface Aggregator Module whose role(s) haveexpanded to scopes such as handling input events, and over time, become much harder to handle on alternate operating systems such as Linux. Apple's T2's security processor, for example, imposes even more restrictions (and at a wider scope) than Microsoft's SSAM implementation.
Linux kernel FL2000DX/IT66121FN dongle DRM driver
C Driver related posts
Engineers complaining about Docker for Mac?
1 project | reddit.com/r/docker | 5 Jul 2022
Any direct fast solutions from synology ?
1 project | reddit.com/r/synology | 30 Jun 2022
How do screen record with audio while using headphones?
1 project | reddit.com/r/mac | 30 Jun 2022
Genuine Question: Why is Synology still using 1gbe RJ45?
1 project | reddit.com/r/synology | 30 Jun 2022
My new custom wrist rest
1 project | reddit.com/r/MechanicalKeyboards | 27 Jun 2022
I see you your Trackball and raise you to the Trackpad!?
1 project | reddit.com/r/audioengineering | 25 Jun 2022
Looking for MacOS M1 Audio driver to control sound level of external DAC
2 projects | reddit.com/r/MacOS | 23 Jun 2022
What are some of the best open-source Driver projects in C? This list will help you:
Are you hiring? Post a new remote job listing for free.
|
OPCFW_CODE
|
/// A protocol indicating that an activity or action supports cancellation.
///
/// ## Topics
///
/// ### Supporting Types
///
/// - ``AnyDatabaseCancellable``
public protocol DatabaseCancellable {
/// Cancel the activity.
func cancel()
}
/// A type-erasing cancellable object that executes a provided closure
/// when canceled.
///
/// An `AnyDatabaseCancellable` instance automatically calls ``cancel()``
/// when deinitialized.
public class AnyDatabaseCancellable: DatabaseCancellable {
private var _cancel: (() -> Void)?
/// Initializes the cancellable object with the given cancel-time closure.
public init(cancel: @escaping () -> Void) {
_cancel = cancel
}
/// Creates a cancellable object that forwards cancellation to `base`.
public convenience init(_ base: some DatabaseCancellable) {
var cancellable = Optional.some(base)
self.init {
cancellable?.cancel()
cancellable = nil // Release memory
}
}
deinit {
_cancel?()
}
public func cancel() {
// Don't prevent multiple concurrent calls to _cancel, because it is
// pointless. But release memory!
_cancel?()
_cancel = nil
}
}
|
STACK_EDU
|
Call for Proposals
Best Practices for Presenters
Code of Conduct
Event Photography Policy
Tips and Best Practices for Professional Virtual Presentations
- Usually up to five people may be presenting in a session at once. Some sessions may vary in size and allow more.
- A screen being shared counts as a person for the limit of people participating.
- You may request to join the presentation group by using the Request button. That will alert moderators in the session.
- Sessions will be moderated by volunteers who can add a person to the presentation group and will do so as a scheduled event begins.
- Moderators can also add audience members to the presentation group for discussions but to avoid confusion with having to leave presentation mode moderators should usually read questions to presenters if they need help.
- Presenters should turn off their camera if not in use to save on bandwidth and CPU. Those on low power devices will thank you.
- When you are done participating, use the Stop Participating button.
- If you choose to turn your camera on, make sure your background is not distracting (roommates, posters, pets).
- Reduce glare from lights and windows.
- Use headphones, preferably a headset with a mic, to cut out background noise.
- Consider using an extra monitor to view your audience and presentation on different screens.
- Have a physical clock to help keep track of time, since your computer clock may be hidden when in presentation mode.
- Consider standing up if it is more natural for you to present like this, though avoid bouncing around as it can become distracting.
- Focus on the intonation of your voice rather than hand gestures.
- Presenters should strive for presentations that are visually accessible to attendees in an online environment. See “Formatting the Presentation” below, as well as pages from the Digital Library Foundation3 and Code4Lib4.
- For supplemental materials, when possible, do not use PDF. Make materials available in their original format (Word, txt, Excel, etc.) to ensure their accessibility.
- Share your slides – note that it’s easier to make your original presentation slides accessible than it is to produce an accessible PDF of your slides.
- Be inclusive in your presentation:
- If your presentation is interactive, tell the audience at the start what you will be doing (“I will ask you to write down some thoughts multiple times during the presentation, so please get a pen and paper now,” or “I’ll give you 1 minute to fill out this poll”), and actually give them enough time to complete the work.
- Please speak slowly and clearly, like a newscaster, to make it easier for the captioner (and audience) to understand you.
- If you are answering questions from the chat, read the question out loud and credit the person asking it (unless you are taking anonymous questions).
- If you are using acronyms or jargon, please state the full name, followed by the acronym spelling (for example, “American Broadcasting Company, ABC”) or definition the first time you reference it. Describe images to the audience as you present, and give a brief description of a video before you play it if it has no captions.
- Instead of asking your audience to read a slide, read it aloud to them instead (the audience may have a very small screen, or may not be able to see the slides).
- Read aloud any URLs to the audience. Use a URL shortener like tinyurl or bit.ly to make it easier.
- If possible, please email the Conference Committee by Friday, May 21st at firstname.lastname@example.org with any jargon, acronyms, or technical terms that you intend to use in your presentation. The Committee will be sharing these terms with the captioning team.
Formatting the Presentation5
It’s important that your presentation is easy to read and easy to follow. Keep these tips in mind when you design the background and format of your slides:
- Use a simple, solid colored background throughout the presentation
- Make sure that the font color contrasts with the background
- Limit how many colors you use
- Use a standard text like Times New Roman, Arial or Calibri
- Capitalize words and phrases only for emphasis
- Include keywords on your slides
- Stay away from long, informational sentences
Engage your Audience6
If it’s live, open your presentation with a question or poll to interact with the audience.
- Find your voice: Use an energetic, active voice to grab and maintain attention
- Avoid using filler words such as “like” and “um”
- Use an accessory microphone for clearer sound
- Consider your body language:
- Set a neutral position; sit up straight with feet shoulder width apart and weight evenly distributed
- Use hand gestures to highlight your words
- Maintain eye contact to convey sincerity, place a sticky note near the camera as a reminder
- During interactive elements, tell the audience at the start what you will be doing (“I will ask you to write down some thoughts multiple times during the presentation, so please get a pen and paper now,” or “I’ll give you 1 minute to fill out this poll”), and actually give them enough time to complete the work7
Evergreen Event Code of Conduct
1 Adapted from: https://mitcommlab.mit.edu/nse/commkit/virtual-presentations/
2 Adapted from https://2021.code4lib.org/general-info/accessibility#presenters
6 Adapted from https://www.unmc.edu/facdev/_documents/presenter-forms/Virtual_Presentation_Tips.pdf
|
OPCFW_CODE
|
February 2nd 2015
Recently a user raised Issue #661 on GitHub where unfortunately they lost a total of 1.43033675 bitcoin. This blog article provides details of the forensics we performed to find out what happened and our recommendations for action to try to prevent this happening again.
The wallet was created in April 2014 in MultiBit Classic. On first creation a MultiBit Classic wallet contains a single private key/address. Any transactions created use this single address as change. When additional private keys are created or imported the second address in the wallet is always used for change. This rule was introduced after a user imported a paper wallet and, not understanding the nature of Bitcoin change outputs, subsequently manually deleted their MultiBit Classic installation to "leave no trace".
Examining the wallet structure closely it has a total of 580 private keys. Almost all of these have private keys that are random numbers - this is normal. However the second private key in the wallet corresponds to the private key (hex) of
0000000000000000000000000000000000000000000000000000000000000000. The third private key corresponds to the private key (hex) of
0000000000000000000000000000000000000000000000000000000000000001. These are obviously not random numbers.
In the MultiBit log we record when the user shows the "Tools | Import private key" screen but we do not record the actual private keys imported for privacy reasons. In our opinion by far the most likely way these specific private key values were inserted into the wallet is that the user was experimenting with the private key import capability in MultiBit Classic.
This was in April 2014. Unfortunately having a change address with a private key of 0 makes any bitcoin controlled by that address unspendable. This is due to a quirk in the Bitcoin protocol. The address for the private key is arithmetically correct but no acceptable signature can be created to transfer the bitcoin (see this BitcoinTalk comment). From April 2014 to Jan 2015 the user used this wallet to receive bitcoin and then almost immediately spend the total amount (less the miner's fee) out. For these transactions there was no change output and everything worked correctly.
From 15 Jan 2015 to 26 Jan 2015 the user made 8 transactions where change outputs were created and it is these outputs that are unspendable. They total 1.43033675 bitcoin in value.
Bitcoin is an experimental protocol so it is important to improve things wherever possible to make the system more robust and safer to use. We recommend the following actions are taken:
0000000000000000000000000000000000000000000000000000000000000000cannot be added to the wallet should a user attempt to import this value.
At MultiBit we work hard to provide users with a safe, secure and open source Bitcoin wallet. We hate it when users run into problems like this and we believe that these recommendations will improve the overall resilience of Bitcoin wallets for everyone.
Here are some related articles:
|
OPCFW_CODE
|
I’ve tried to see what my application looks like when I design the materials, and I want to use the new Lib cards.
My problem is that it’s causing this error in my gradle file, and I need to fix it.
Error: compileSdkVersion android-L requires compilation with JDK 7
I downloaded and installed jdk-7u60-macosx-x64.dmgand. java version in Terminal shows me that version 1.7 is installed:
java version 1.7.0_60
Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)
Я… Я… What kind of java…
give it to me:
/usr/bin/java -> /System/Library/Frameworks/JavaVM.framework/Versions/current/Commands/java
…/actually/… …doesn’t have a home. I have… I have… I found a house here:
And set the path to the SDK storage settings in Android Studio under JDK Location. But it’s not working. Looks like he still can’t find JDK 7.
I use Mac OSX 10.9.3 and Android Studio (beta) 0.8.1.
How can this problem be solved?
Decision No 1:
Create a folder:
in JDK settings have solved my problem I had the same problem when I started working. HIH
Decision No 2:
The answer @megapoff is correct. But I’m having a little trouble fixing it. So here’s a detailed, step-by-step decision.
- Download the DMG JDK-7 file here.
- Click on DMG and follow the instructions. It will install and configure the JDK-7 on a Mac.
- Now go to File-> Project structure -> SDK location in the Android studio.
- Click on the JDK location on the preview and go to /->Library>Java->JavaVirtualMachines->jdk1.7.0_60.jdk->Content ->Home.
Note :- This is not /system/library, but /library.
- Click on Apply and OK …….
Bingo ….. Reconstruction of the project at …….
Decision No 3:
The other answers are very precise, but I would like to be more concise and clear. To prevent others from returning to this page unnecessarily.
Important* The old method is /System/Library/Java… and the new path is /Library/Java. (Not in the system catalogue)
Replace the old method: /System/Library/Java/JavaVirtual Computers/jdk1.6.0_0.jdk/ Content/Domains
Define your new path: /Library/Java/JavaVirtual Machines/jdk1.7.0_79.jdk/Content/Home
Decision No 4:
I haven’t moved to the android’s studio yet. I’ve used it for some tests and I like it a lot. A change has simply not been possible until now. I had a similar problem with Eclipse, and this is of course another solution, but when I look at one of my test projects, it seems that this is the way to go:
Open your project and go to File->Settings.
In the Project Settings section, expand the compiler and go to the Java Compiler option. You want to use javac and set the bytecode version of the project to 1.7.
I hope it works.
Decision No 5:
For jdk-7u79-macosx-x64.dmg just install the directory at
They’re both different.
Decision No 6:
Instead of searching for the file in :
/System/ Libraries/ Java/ Java Virtual Machines/
We should check it out:
android studio java_home mac,android studio embedded jdk,android studio java 11,jdk location mac,cause: invalid type code: 19,mac install jdk,android studio download,android studio online,android studio features,update android studio,android studio tutorial,android studio emulator,flutter download,configure android sdk,how to run a project in android studio,sdk emulator directory is missing,android sdk download,no android sdk found,android sdk location,sdkmanager,how to check android sdk version command line,ram requirement for android sdk,jdk for android studio download,what is jdk in android,does android studio come with jdk,need jdk for android studio,what is the android logging system,sdk does not contain any platforms,unable to run mksdcard.exe sdk tool,download sdk for android studio,how to install android studio on windows 10,android sdk jdk 14,android studio built in java,is jdk required for android studio,android studio setup wizard restart,how to open android studio setup wizard,sdk components setup,install flutter plugin android studio,android studio sdk tools directory is missing,android studio jdk,android studio jdk version,android studio for mac,mac use android studio jdk,android studio and gradle are using different locations for the jdk.,how to install android studio on mac,android studio and gradle are using different locations for the jdk mac
|
OPCFW_CODE
|
Swfupload suddenly stopped working in IE (with flash 10 and windows 7)
In our backend-application we make
extensive use of swfupload. It has always worked perfectly and we're
really appreciating the efforts of the swfupload team. Unfortunately,
we keep getting more and more complaints from our customers, saying
the uploader doesn't work anymore in their browser (Internet
Explorer). The 'select files' button appears as an empty square, with
a red cross.
I've spent hours trying to fix this, because also on my computer it
doesn't work anymore. I'm using Windows 7 + Internet Explorer 8 +
Flash 10.2. I've disabled all of my security settings, but no effect.
So I thought, it's because of a wrong implementation, but if I go to
http://demo.swfupload.org/v220/simpledemo/, I get the same result.
Also, the demo's in the latest beta3 package seem to have the same
problem.
I hear and see a lot of people complaining about this, but can't seem
to find a proper solution or any response from the swfupload team. Is
swfupload dead? Should I choose another tool? Any other people with
the same problem?
I also placed this question on the swfupload forums, but I'm not really counting on answers over there.. http://groups.google.com/group/swfupload/browse_thread/thread/344a9079330dd805
This is the response I got from one of the swfupload developers, thought I'd share it with other people searching for a solution (there is none):
"We have users with the same problem (on Windows 7, IE 8 32-bit, Flash
10.1 as I recall it) and we have tried many different suggestions found in various forums but without success. Eventually we had to recommend the users to switch to Firefox."
So, to me, it looks like swfupload is a dead end..
I had the same issue(win7 IE 64bit) and simple solved the issue by upgrading my flash. Maybe you need: http://swfupload.org/forum/generaldiscussion/2140
Don't know if this is solved yet, but I've had tons of complaints from various customers with various setups about swfupload. I actually think using Flash for uploading files was a bad idea anyway! So I'd suggest people to look for decent alternatives. I personally find qqUploader a good one, using solely javascript, very friendly to use.
Yeah there are millions of different options, especially with HTML5. I had a quick look and http://www.uploadify.com/ HTML5 version has the best reviews.
|
STACK_EXCHANGE
|
Most recently I've been adding the code for thrust and as a result once you start a game, your ship appears and you can move around.
One nuance of porting is the distinction between zero-page accesses and memory accesses. Looking at the original 6502 source listing, there's no indication which is which. On the 6809, all labels for direct page variables are .EQU statements, and the operand prefix is '*'. If you forget the asterisk, the code assembles but doesn't work as planned. Somewhat fortuitously though, the way I have the memory map configured, you'll get stray pixels on the top few lines of the video, and that's easily trapped in the MAME debugger since - atm - Asteroids never renders there (see below).
The other issue I touched on last post is the vertical resolution. After some experimentation in the C port, which is rendering 'vectors' in 1024x1024 coordinate space, I've confirmed that Asteroids uses approximately 788 "lines" of the display space, effectively leaving the top and bottom 118 lines (or 10% each) blank. When you reduce the resolution to 192 pixels, that 20% blank space is quite significant.
Unfortunately 192 doesn't quite divide into 788 nicely, and 192*4 (768) crops the score and copyright messages. The Coco3, however, conveniently has a 200-line mode, which would greatly simplify the scaling (right-shift by 2) and allow use of most of the display. The only issue is that the graphics were, IIUC, originally designed by Norbet for a 192-line display. I'll have to experiment to see if and how they could be adapted for a higher resolution.
But back to porting code for now...
UPDATE: Tonight I ported the code that handles all the collisions. I need to update my disassembly comments in a few areas here, as the routines actually handle all collisions between all objects, whilst my comments suggest it's only the collisions between shots and other objects.
There's a decent chunk of code involved, so not surprisingly it doesn't yet work. Worse yet, I've actually broken what was working before, in the process of 'fixing' a few bugs that I discovered porting the new code. It's too easy to forget that the 6502 X,Y registers are 8-bit vs 16-bit on the 6809...
Looking purely at object (binary) code size, the porting is now roughly 50% complete.
|
OPCFW_CODE
|
That means running out of article ideas is out of the question. In the computing world, we talk about data in the sense that www.swagbucks.com register is any encoded information. Moreover, snakes are different from pythons, snakes uses venom to kill its prey while pythons utilized constriction on their hapless victims. Now, all you have left to do is to set an invitation date, a deadline date, tryon surveys some reminder dates (or an interval, if you prefer), and invite sufveys right people to come take your survey. It was impossible for Margaret to help all five children. If not, then set a reminder to ensure that you and other office employees run the scan on their computers periodically.
There are some well known companies like Inuvo connects advertisers and publishers survets affiliate marketing platforms based on pay per performance that gets advertisers more quality leads and clicks. Raptor is a responsive, flexible and adaptable free web hosting website template that will help you launch your web site quickly and efficiently. This type of economics can assist as soon as present is necessitate for money and you would not contain to go during a long process or set up your residence or auto for security. It is a wooden structure i buy a prepaid visa card a tryon surveys front tryoj. Posted by Unknown at 10:10 PM Troyn ThisBlogThis. Recent cries of catastrophic aurveys change depict human beings as threatening forces who tryon surveys up the melting of glaciers by adding more carbon dioxide (CO2) to Earths atmosphere.
| All three sites listed above are great alternatives to Points for Surveys that pay with cash. They are property brokers or property speculators who help you to dispose of the typical bugs connected with a quick house bargain. One action taken by corporations surveyss address the labor tryon surveys is outsourcing the operations. Panama is now seen as the hottest real estate market on the globe. Gathering meaningful insights begin with summarizing the raw responses. They did, very good ones in fact. The fast paced 21st century has increased the demand for transcription services. Use tryon surveys service like PhishLabs to monitor the dark web for stolen data or exploits against your infrastructure. Thank you for visiting. | To apply eyeliner correctly, you need to maker sure surbeys have applied all other make-up first.
They offer flat 6p calls to mobiles and landlines in Tryon surveys, and Toggles mobile rate was fryon higher. Not sure whether you should be making a survey or a questionnaire. Both these printable invitations tryoj designed to be A5 in size, so make sure you are using A5 paper, or set your printing options accordingly so it can be trimmed to size after it has been printed. If you are an impatient survveys, you can always look on the internet. It will take time to start earning an income. Youre more likely to lure in potential customers if learn more here can effectively communicate this information.
|
OPCFW_CODE
|
What Are Abscesses in Reptiles?
An abscess is considered a swollen area or pocket of tissue located on the body that is full of white blood cells. Abscesses typically form due to an infection. In response to the infection, white bloods will fill the pocket of tissue to combat the infection. In most species, once a pus is popped or drained, the white blood cells will reabsorb back into the body.
Reptiles are a bit different. They lack the enzymes that would normally allow the body to resorb the contents of an abscess, leading to the development of firm, cheese-like pus inside a capsule in the body tissue.
Abscesses can be located on or just under the skin, or internally. They form for many reasons but all develop due to introduction of bacteria into the body.
Symptoms of Internal Abscesses in Reptiles
Since abscesses can form almost anywhere in or on the body, symptoms can vary based on location.
For external abscesses, you may see:
A swelling or mass
Trauma to the skin
Anorexia (not eating)
Common areas you may notice swellings are on limbs, the nose, vent, or ears.
Symptoms of an internal abscess can be less obvious but may include:
Abnormal breathing (especially open-mouthed breathing)
Abnormal neurological symptoms such as unsteady walking or seizures
One swollen/protruding eye
Causes of Abscesses in Reptiles
While abscesses are a common problem for all reptiles, different species are more prone to abscesses in some spots than others.
In general, abscesses form on the nose due to stressed animals that repeatedly attempt to escape their enclosure.
Abscesses on the head, tail, or limbs are often due to trauma from bites (may be from cage-mates or live prey), falls, or attacks from other animals in the household.
Dental disease can lead to the formation of abscesses in the mouth.
Reptiles with a suppressed immune system are more likely to develop abscesses. Stress may also predispose a reptile to developing an abscess and is the most likely cause for an internal abscess. Common stressors for reptiles include:
Incorrect enclosure temperatures
Inappropriate humidity levels
Inadequate or unsafe housing
How Veterinarians Diagnose Abscesses in Reptiles
In many cases, a physical examination is enough to diagnose external abscesses. Confirmatory testing via a fine needle aspirate can be done, in which the veterinarian inserts a needle into the abscess and looks at the collected cells under a microscope to identify white blood cells and bacteria. Identification of the bacteria present can be performed using special stains and culturing of the sample.
Internal abscesses may require imaging such as X-rays or ultrasounds for diagnosis. Blood work can also help diagnose not just a suspected internal abscess, but also identify which organ the abscess is affecting or located within.
Biopsies of any tissues suspected of abscessation are also common.
Treatment of Abscesses in Reptiles
Since abscesses do not resolve on their own in reptiles and cannot be resorbed, the first step of treatment is to surgically remove the abscess in most cases. This may include removing it entirely as if it were a mass, or cutting it open to empty any pus and close the skin afterward. Both versions of this procedure require sedation or anesthesia. Reptiles are treated with antibiotics so they can heal fully.
Other aspects of treatment may include:
Providing fluids for hydration
Warming the patient if hypothermic (too cold)
Correcting any care issues that may have triggered the abscess
Internal abscesses almost always rely on antibiotic therapy for treatment, except for abscesses inside the mouth, in which case treating any dental disease will be the first step. While surgical removal is also an option, in most cases this isn’t possible.
Treatment of abscesses requires oral or injectable antibiotics such as enrofloxacin, amikacin, or ceftazidime—it is not recommended to try to treat your reptile at home.
Recovery and Management of Abscesses in Reptiles
An abscess is essentially a localized infection. For abscesses that are small, treated early, and do not involve bone, the prognosis for recovery is good.
If a localized infection spreads into the blood and creates a systemic infection (affecting the whole body), sepsis occurs and can be fatal. Sepsis is also more likely if there is bone involvement, multiple abscesses, or the reptile has other additional health conditions.
After completing surgery for an abscess, it’s essential to complete the full course of antibiotics prescribed by your veterinarian as failure to do so can lead to recurrence of the abscess. You will want to keep the incised area clean, dry, and monitor for signs of the incision opening, draining, swelling, or becoming painful.
If your reptile has cage mates, set up their own recovery enclosure until they are fully healed. Careful attention should be taken to make sure there is nothing in their enclosure that can catch on their healing incision. Look for any protruding edges and do not provide enrichment or decorative items in their enclosure until they are healed.
If your reptile is taking daily oral antibiotics, don’t worry about cleaning their incision unless it becomes covered in something, in which case you can spot clean the area with warm water and dish soap, gently wiping away any accumulations. Be sure to dry the area when you are done. Disturb the tissue as little as possible if cleaning is necessary. Always reach out to your vet if you have any questions during your reptile’s recovery.
Featured Image: iStock.com/Sami Sert
Help us make PetMD better
Was this article helpful?
|
OPCFW_CODE
|
package com.drahnea.fillingtool.model;
import java.util.LinkedList;
import java.util.List;
/**
*
* @since Jan 26, 2015
* @author sdrahnea
*/
public class Element
{
private String id;
private List<Element> parents = new LinkedList<>();
private List<Element> childs = new LinkedList<>();
private Table table;
private List<Table> data = new LinkedList<>();
private boolean populated;
private boolean inserted;
private List<Relation> relations = new LinkedList<>();
private List<String> childsId = new LinkedList<>();
/**
*
* @param parent
* @param child
* @param table
* @param populated
* @param inserted
*/
public Element(String id, Table table, boolean populated, boolean inserted)
{
this.id = id;
this.table = table;
this.populated = populated;
this.inserted = inserted;
}
public List<Element> getParents()
{
return parents;
}
public void setParents(List<Element> parents)
{
this.parents = parents;
}
public List<Element> getChilds()
{
return childs;
}
public void setChilds(List<Element> childs)
{
this.childs = childs;
}
public Table getTable()
{
return table;
}
public void setTable(Table table)
{
this.table = table;
}
public boolean isPopulated()
{
return populated;
}
public void setPopulated(boolean populated)
{
this.populated = populated;
}
public boolean isInserted()
{
return inserted;
}
public void setInserted(boolean inserted)
{
this.inserted = inserted;
}
public void addParent(Element parent)
{
if (parent != null)
{
if (!isElementExist(parent, parents))
{
this.parents.add(parent);
}
}
}
public void addChild(Element child)
{
if (child != null)
{
if (!isElementExist(child, this.childs))
{
this.childs.add(child);
}
}
}
private boolean isElementExist(Element element, List<Element> elements)
{
for (Element e : elements)
{
if (e.getId().equalsIgnoreCase(element.getId()))
{
return true;
}
}
return false;
}
public String getId()
{
return id;
}
public void setId(String id)
{
this.id = id;
}
public void addData(Table table)
{
this.data.add(table);
}
public List<Table> getData()
{
if (this.childs.isEmpty() && this.data.isEmpty())
{
this.data.add(this.table);
}
return this.data;
}
public void setData(List<Table> data)
{
this.data = data;
}
public List<Relation> getRelations()
{
return relations;
}
public void setRelations(List<Relation> relations)
{
this.relations = relations;
}
public void addRelation(Relation relation)
{
this.relations.add(relation);
}
public void fillElementColumn(Table table, Column column, String value)
{
for (int t = 0; t < this.getData().size(); t++)
{
Table dataTable = this.getData().get(t);
if (dataTable.getName().equalsIgnoreCase(table.getName()))
{
for (int c = 0; c < dataTable.getColumns().size(); c++)
{
Column dataColumn = dataTable.getColumns().get(c);
if (dataColumn.getName().equalsIgnoreCase(column.getName()))
{
dataColumn.setValue(value);
dataTable.getColumns().set(c, dataColumn);
}
}
this.getData().set(t, dataTable);
}
}
}
public String getColumnValue(Table table, Column column)
{
for (Table t : this.getData())
{
for (Column c : t.getColumns())
{
if (t.getName().equalsIgnoreCase(table.getName())
&& c.getName().equalsIgnoreCase(column.getName()))
{
return c.getValue();
}
}
}
return null;
}
public void addChildsId(String childId)
{
if (!existChildId(childId))
{
this.childsId.add(childId);
}
}
public boolean existChildId(String id)
{
for (String childId : this.childsId)
{
if (id.equalsIgnoreCase(childId))
{
return true;
}
}
return false;
}
public List<String> getChildsId()
{
return childsId;
}
public void setChildsId(List<String> childsId)
{
this.childsId = childsId;
}
}
|
STACK_EDU
|
The Open Document Format for Office Applications, commonly known as OpenDocument, was based on OpenOffice.org XML, as used in OpenOffice.org 1, and was standardised by the Organization for the Advancement of Structured Information Standards (OASIS) consortium.
The first to initiate the standardisation of what became the OpenDocument standard was the DKUUG standardisation committee on its meeting the 28th August 2001. The first official OASIS meeting to discuss the standard was December 16, 2002; OASIS approved OpenDocument as an OASIS standard on May 1, 2005.
The group decided to build on an earlier version of the OpenOffice.org XML format, since this was already an XML format with most of the desired properties, and had been in use since 2000 as the program's primary storage format. Note, however, that OpenDocument is not the same as the older OpenOffice.org XML format.
According to Gary Edwards, a member of the OpenDocument TC, the specification was developed in two phases. Phase one (which lasted from November 2002 through March 2004), had the goal of ensuring that the OpenDocument format could capture all the data from a vast array of older legacy systems. Phase Two focused on Open Internet based collaboration.
OASIS is one of the organizations which has been granted the right to propose standards directly to an ISO SC for "Fast-Track Processing". This process is specifically designed to allow an existing standard from any source be submitted without modification directly for vote as a 'Draft International Standard (DIS) (or Draft Amendment (DAM)). Accordingly, OASIS submitted the OpenDocument standard to JTC 1/SC 34 Document description and processing languages a joint technical committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for approval as an international ISO/IEC standard. It was accepted as ISO/IEC DIS 26300, Open Document Format for Office Applications (OpenDocument) v1.0 draft International Standard (DIS), and it was published November 30, 2006 as ISO/IEC 26300:2006 Information technology -- Open Document Format for Office Applications (OpenDocument) v1.0.
Gary Edwards, a member of the OpenDocument TC, says that after ISO standardization, "there is no doubt in my mind that OpenDocument is heading to the W3C for ratification as the successor to HTML and XHTML.". The W3C has not made any public statements supporting or denying this statement, however.
Since Open Document Format for Office Applications (OpenDocument) v1.0 was accepted as an ISO/IEC standard, OASIS have updated their standard to v1.1 in 2007. This update includes additional features to address accessibility concerns. It was approved as an OASIS Standard on 2007-02-01 following a call for vote issued on 2007-01-16. The public announcement was made on 2007-02-13. This version was not initially submitted to ISO/IEC, because it is considered to be a minor update to ODF 1.0 only, and OASIS were working already on ODF 1.2 at the time ODF 1.1 was approved. However, it was some years later submitted to ISO/IEC (as of March 2011 it was in "enquiry stage" as Draft Amendment 1 - ISO/IEC 26300:2006/DAM 1) and published in March 2012 as ISO/IEC 26300:2006/Amd 1:2012 - Open Document Format for Office Applications (OpenDocument) v1.1.
OpenDocument v1.2 includes additional accessibility features, RDF-based metadata, a spreadsheet formula specification based on OpenFormula, support for digital signatures and some features suggested by the public. OpenDocument 1.2 consists of three parts: Part 1: OpenDocument Schema, Part 2: Recalculated Formula (OpenFormula) Format and Part 3: Packages. It was approved as an OASIS Committee Specification on 17 March 2011 and as an OASIS Standard on 29 September 2011.
In October 2011, the OASIS ODF Technical Committee expected to "start the process of submitting ODF 1.2 to ISO/IEC JTC 1 soon". In May 2012, the ISO/IEC JTC 1/SC 34/WG 6 members reported that "after some delay, the process of preparing ODF 1.2 for submission to JTC 1 for PAS transposition is now in progress". In October 2013, after a one-month review period for OASIS members, the OASIS Open Document Format for Office Applications (OpenDocument) Technical Committee requested that OASIS submit ODF 1.2 to the ISO/IEC Joint Technical Committee 1 (JTC1) for approval as a proposed International Standard under JTC1's "Publicly Available Specification" (PAS) transposition procedure. This submission happened in late March 2014. As of 3 April 2014, ODF 1.2 had reached the enquiry stage of ISO's ratification process. As a DIS, it has received unanimous approval by the national bodies in September 2014, as well as a number of comments that need to be resolved. It was published as an ISO/IEC international standard on 17 June 2015.
OpenDocument version history
|OpenDocument version||OASIS Standard approved on||ISO/IEC Standard published on|
The standardization process included the vendors of office suites or related document systems, including (in alphabetical order):
- Adobe (Framemaker, Distiller)
- Corel (WordPerfect)
- IBM (Lotus 1-2-3, Workplace)
- KDE (Calligra, former KOffice)
- Sun Microsystems (StarOffice/OpenOffice.org)
Document-using organizations who initiated or were involved in the standardization process included (alphabetically):
- Intel (they are developing sample documents as a test suite) (Bastian, 2005)
- National Archives of Australia
- New York State Office of the Attorney General
- Novell (Berlind, October 25, 2005)
- Society of Biblical Literature
As well as having formal members, draft versions of the specification were released to the public and subject to worldwide review. External comments were then adjudicated publicly by the committee.
- "OASIS Open Document Format for Office Applications (OpenDocument) TC". Organization for the Advancement of Structured Information Standards.
- "Meeting agenda for DKUUG STD 2001-08-28 - item 5.6" (PDF). Retrieved 13 March 2015.
- Christian Einfeldt. "Gary Edwards: OpenOffice.org 2.0 leaping over legacy lockdown with clean XML". MadPenguin. Archived from the original on February 22, 2006.
- "Open Document Format for Office Applications (OpenDocument) v1.0". International Organisation for Standardisation. 2006-11-30. Retrieved 2006-12-05.
- "OpenDocument 1.1 Specifications". OASIS. 2006. Retrieved 2006-10-31.
- "Approval of OpenDocument v1.1 as OASIS Standard". OASIS. Retrieved 2007-02-06.
- "Members Approve OpenDocument Version 1.1 as OASIS Standard". OASIS. Retrieved 2007-02-15.
- ISO/IEC 26300:2006/Amd 1:2012 - Open Document Format for Office Applications (OpenDocument) v1.1, 2012-03-08, retrieved 2012-04-12
- Clarke, Gavin (3 October 2011). "Open Document Format updated to fix spreadsheets". The Register. Retrieved 18 April 2012.
- OASIS office message: Ballot for CS approval of ODF Version 1.2 has passed
- Members Approve OpenDocument Format (ODF) Version 1.2 as OASIS Standard, 2011-10-05, retrieved 2012-04-12
- OASIS Open Document Format for Office Applications (OpenDocument) TC, retrieved 12 April 2012
- Minutes of teleconference meeting of ISO/IEC JTC 1/SC 34/WG 6, 2012-05-23 (PDF), retrieved 2012-10-21,
Patrick Durusau reported that, after some delay, the process of preparing ODF 1.2 for submission to JTC 1 for PAS transposition is now in progress. It is not yet know when the submission will be ready ...
- Member Review of Proposed Submission of ODF v1.2 to ISO/IEC JTC1, retrieved 2014-01-31
- Proposed Submission of ODF v1.2 to ISO/IEC JTC 1, 2013-10-19, retrieved 2013-12-24
- "Minutes of ISO/IEC JTC 1/SC 34/WG 6 teleconference meeting, 2014-04-16" (PDF). 2014-04-24. Retrieved 2014-10-13.
- "ISO/IEC JTC 1/SC 34/WG 6 N 103 Minutes of teleconference meeting of ISO/IEC JTC 1/SC 34/WG 6 2014-09-24, 23:00-00:00 UTC" (PDF). 2014-09-25. Retrieved 2014-10-13.
- "ISO/IEC 26300-1 - Information technology - Open Document Format for Office Applications (OpenDocument) v1.2 - Part 1: OpenDocument Schema". ISO. Retrieved 2015-06-02.
- "ISO/IEC 26300-2 - Information technology - Open Document Format for Office Applications (OpenDocument) v1.2 - Part 2: Recalculated Formula (OpenFormula) Format". ISO. Retrieved 2015-06-02.
- "ISO/IEC 26300-3 - Information technology - Open Document Format for Office Applications (OpenDocument) v1.2 - Part 3: Packages". ISO. Retrieved 2015-06-02.
|
OPCFW_CODE
|
using System;
namespace Vending_Machine_2
{
public class Wallet
{
public int amount;
public Wallet()
{
amount = 0;
}
public void InsertAmount(int InsertedAmount)
{
amount += InsertedAmount;
}
public void WithdrawAmount(int WithdrawedAmount)
{
amount -= WithdrawedAmount;
}
public void CheckBalance()
{
if (amount == 0)
{
Console.WriteLine("Your balance is 0. Time to get to work sonny.");
}
else
{
Console.WriteLine($"Your balance is {amount}. Time to put on them spending pants??");
}
}
}
}
|
STACK_EDU
|
Enhancements, Fixes, and Known Issues 06/22/2022
The following enhancements, fixes, and known issues are found in this release, dated 06/22/2022:
- Dashboard behavior and design were updated.
- New filters were added to the Jobs page.
- Plans were renamed to Policies in Elastio Tenant.
- “Repair” functionality was added to Sources page of the Tenant. This option would allow to recover from some Cloud Connector failure modes.
- Jobs filters would not match the ones on Assets page. This was fixed. They now include accounts, kinds, instances, regions, etc.
- Asset data was exposed on the Jobs page to expand jobs related information.
- Links to documentation were added to the Dashboard page in Tenant.
- Incorrect protected items names were observed in the Windows block storage
iscanreports on the Reports page. This was fixed.
- “Policy name” field validation was introduced to the Elastio Tenant.
- A warning that unlinking a Source is not removing Elastio from AWS was added as a pop-up to the Sources page.
- Input field on Link Source flow would accept “-“ symbol and produce an error. This was fixed.
- Allowed symbols suggestion for the Password field would now be shown on the Sign-up page.
- Cloud Connector installation job was extended with subnet selection.
- The ability to list VPC’s subnets was added to the “Link Source” flow.
- Subnet selected upon Vault creation is now stored in Vault data.
- CloudConnector and Vault repair options were added to the Elastio Tenant.
- Elastio IAM policies did not allow the lease to be released. This was fixed.
- Malware scans are now supported on Windows.
elastio iscanwill now work on Linux arm64.
- CLI packages for Ubuntu 21 and 22 are now being published and are available to install.
- An option to restore to a specific availability zone was added to agentless restore commands.
elastio ec2 restorewill now allow to override VPC.
- The default behavior to rely on network config at backup time for restore was modified to be smarter and detect cases when the config is unavailable.
- Default policies were missing in a newly created Tenant. This was fixed.
- Duplicates of assets would appear in some cases on Assets page after Cloud Connector activation. This was fixed.
- Email address validation was added to ensure that email addresses are always lower case.
- Default policies removal from the Tenant was previously possible under some circumstances. This was fixed.
- Removing custom role would fail with Internal error. This was fixed.
- Incorrect range number would occasionally appear in pagination after changing the number of rows per page. This was fixed.
- There were no items on a page with pagination if the amount of total items was higher than the chosen range. This was fixed.
- Windows EC2 backup would fail if AWS snapshot creation took a long time. This was fixed.
- An error message for missing artifacts was improved in updating Cloud Connector operation.
- Vault upgrade would fail with: “Error deleting Batch Compute Environment”. This was fixed.
iscanwas unable to handle file system corruption properly. This was fixed.
- Stream restore asked for overwrite even when a file to restore did not previously exist. This was fixed.
- Block backups would occasionally fail for dismounted devices. This was fixed.
- Block backup would fail with “Snapshot file is too large” error. This was fixed.
- Device was still present in the system after
elastio umountoperation on some occasions. This was fixed.
- Block restore and mount would fail under certain circumstances on Windows with error “The following devices … were not found in recovery point”. This was fixed.
|
OPCFW_CODE
|
RECOMMENDED: If you have Windows errors then we strongly recommend that you download and run this (Windows) Repair Tool.
The researchers concentrated on posts relevant to Java security. complicated and poorly documented, and error reports from runtime systems confusing. “In.
Error Installing Windows 7 Blue Screen DON’T MISS: How to download and install Windows 10 right now But there’s one Windows 10 error that’ll leave you completely stunned: The Something happened error message bug. The image above, which went viral on Twitter, says. I am, of course, referring to the infamous Windows “blue screen of death” (BSOD) error message. Even for
Floating-Point literal For Floating-point data types, we can specify literals in only decimal. System.out.println(s); System.out.println(s1); // It give compile time error } } // Java program to illustrate the application of boolean literals.
This tutorial explains how Java’s System.in, System.out and System.err streams work, which enable you to read and write data from and to the console.
OR ACL E D AT A SH E ET Oracle Java SE Advanced Optimizing Management, Minimizing Risk Software platform maintenance and patching is one of the greatest
Softwrap File Error 3 Game Maker How To Fix Game Maker Softwrap File Error 3? – What are the different types of errors? Game Maker Softwrap File Error 3 may be caused by windows system files damage. Bluescreens are typically caused by unsuspected software errors in device drivers. SEE:Photos: 3D printing with the Ultimaker 2 Specifications: Price: $2500 Dimensions: 14.1 x
Can anyone please tell me why I get the following error when I try to run my Java programs? The programs get compiled successfully but are not able to be run. I have.
Dec 1, 2016. This Java error is sometimes caused by a problem with the JREOPTIONS system option in the sasv9.cfg configuration file or by an incorrect file.
Stack Overflow is a community of 7.7 million programmers, just like you, helping each other. Join them; it only takes a minute:
This beginner Java tutorial describes fundamentals of programming in the Java programming language
Feb 28, 2017. FooFactory: Creating a Foo 2017-02-15 17:01:04,851 [main] ERROR com. stackifytest.logging.FooFactory: java.lang.NullPointerException at.
Solved: Hello, I'm trying to import a Jira backup to Jira 4.4 to the same server (my database got corrupted and I had to wipe it). I created an empty
What we have here is a simple CompletableFuture from Java 8 which we transform into the format we need with the help of thenApply() which allows us to add some data about the current. api.github.com/users/mchernyavskaya" }, "error":.
In computer programming, standard streams are preconnected input and output communication. Standard input is stream data (often text) going into a program. Standard error is another output stream typically used by programs to output error. In Java, the standard streams are referred to by System.in (for stdin),
Hello I am new to android development I don't know how to use JSON. I was trying an existing to make my own but while trying I was having errors and unable to send.
Sep 07, 2010 · GiMy web service gives the given error on the below why this happens
. or not the API call was a success and, if successful, include the requested data. The getErrors() result object will only be populated if the error is due to a failed. one or more validation errors indicating which parameters were invalid: Java.
Error 1603 problems include computer crashes, freezes, and possible virus infection. Learn how to fix these Java runtime errors quickly and easily!
Error Code 3259 Outlook 2011 Mac Gmail MS Outlook is such a great application that users prefer using it even if they work on Mac OS. To fulfill the need, Mac Outlook 2011 is developed that has the. I tried to send a large file via a Gmail account and received the error, ". Gmail Error 1026. an "undocumented feature") in Outlook
Java Help Center – Get help for Java and running java applets. Java.com. Download Help. When error messages specifically include terms such as JRE, JVM and Plug-in, we retain them.
|
OPCFW_CODE
|
What linux driver do i use for Mandrake 9.1????
Im pretty new to linux. What driver do i use for mandrake linux 9.1??? and is there a guide on how to install it?
The ones i see are:
I am running a celeron 900 with a geforce 3
like i said, im new to linux, so i am not sure how to install the driver.
Thanks in advance for your help.
is is dependant of which kind of hardware you have:
let me explain:
- Linux Intel Architecture 32bit CPU : most models (also 32bit AMD CPU)
- Linux Intel Architecture 64bit CPU : very pricy 64bit CPU; very doubtly that you have that in your possesion
- Linux AMD architecture 64bit CPU : less pricy 64bit CPU; also very doubtly that you have that in your possesion
- FreeBSD is a unix derived system ; very doubtly that you have such a system in your possesion
- nForce Drivers : if you have a nForce Chipset driver, you should go there, i think that these are exclusively chipsets on the mobo..
How do i install it now???
please download the driver and read the README file, you'll have to do it someday, believe me...
btw: when installing nvidia driver, please shut down X the right way:
log in as root
/etc/init.d/dm stop (kills X)
then install nvidia driver and modify /etc/X11/XF86Config-4 (with vi (press INS for editing; press ESC :wq ENTER to save and exit) for example)
then bring X up again:
I am completely lost...... is there a detailed step by step guide somewhere with screenshots or something? Remember, im a newbie to linux, only had it for a week so far. I usually use 2000 pro server.
Thanks for any help anyone can give me... I still dont know how to do this logging into root except when it asks me too, or how to do those commands you listed, and how to edit that file.... I have no idea
Thanks for your help.
killer although i have been using GNU/Linux for a little longer than u, i was in the same shoes as you...i relied heavily on win2k pro, but i have always had an old piece of **** laying around with text based rh7.1 on it...which i screwed around with..but anyways, the point of this story is, is that in order to learn you must read read read read.....now, aliens explanation is pretty straight forward, but the use of VI is brought into his explanation....now if you never used it before it will be tough to use...althought he did tell you exactly what to press i find it hard as hell to use, and i personally like using pico, and if i recall right that isn't in mandrake..but anyways d00d, just read....the readme file is very easy to follow...if you just pay attention to it and not rely on others....you start relying on others when what you tried doesn't work....anyways i wish you luck getting it to work :P
how do i find this readme? I downloaded the driver, and it is only one file..... I cant find the readme for it anywhere...
ahah, it's a real newbie
PLEASE READ this 20 times before trying it out!!!!!!!!! I'm NOT JOKING!!!
logging in as root is quite simple, but you have to learn how to use a konsole...
open up a konsole
and with 'su' you can log in as root
but when you have to install the nvidia drivers a konsole won't do you any good.
you have to go to text mode (which is done by pressing CTRL-ALT-F1 (2,3,4,5,6) pressing CTRL-ALT-F7 will put you back to graphical mode..
go to text mode
try loggin in as root and familiarize yourself with 'vi', its an editor.
remember if you don't know how or what, try 'man <command>' which gives you info about the command. and you'll need it.
now when in vi, you can't just edit a file, first press INS to edit a file and when done press ESC then you can do this: ':wq' which Writes and Quits, got it?
now go to text mode, log in as root, kill the X-server ( execute command 'telinit 3' ), install the already downloaded nvidiadriver by 'sh NVIDIA......' )
and when done edit the config file ('vi /etc/X11/XF86Config-4') edit nv to vnidia and comment out dri, glcore, and add 'load "glx" ' to it. save and start graphical mode again (init 5).
now which distro you said you were using???
Im using mandrake 9.1..... with the KDE desktop..... I think i understand so far what you are saying... just kinda terrified i am gonna screw something up.
sorry, it wasn't the intention, but you have to read the manuals or the readme files and get the feeling of vi, before anything really, you never know that hard drive crashes and you're stuck with the emergency shell to make the hard drive better, or just anything else....
if you know vi, it is quite easy to use, just don't forget pressing INS after entering vi, because your text will be screwed up, but you can always exit without changes with ( :q! ), try vi out in a konsole first.
it's nothing when everything works, but you never know that you are a owner of a KT400 mobo and X won't start and you have to get back to graphical mode (you can always use links or lynx for browsing, but without graphical mode, it's not that easy...)
|All times are GMT -5. The time now is 02:13 AM.|
Powered by vBulletin® Version 3.7.1
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.
Copyright ©1998 - 2014, nV News.
|
OPCFW_CODE
|
November 28, 2003 - The file has been updated. Be sure to get the new version!
:: A Hello ::
A lot of you are building Windows XP Unattended CDs, but if you are like me, you've done this to simplify installation. Stripping out default features is an incredable plus, as well! But, if you're like me, you really liked adding your own features to the installation routeen. Now Winrar installs itself, as does every patch, and most all program updates.
But I'm old school, and as much as I like my automated updates, I also like my options, dammit!
:: A Script ::
So I created Windows Post-Install Wizard (See Attached).
Using this Hypertext Application, you can add whatever options you wish. The Readme is incomplete, and I consider this a "beta", but most everything is there. I've tested this at 800X600, which is the resolution that Windows XP will switch to after the install, and everything appears to fit. If you modify it, bare that fact in mind!
As HTA files are not executed directly by windows, you will need to add a CMD file to your GUIRUNONCE.
[GuiRunOnce] %systemdrive%\install\wpi.cmd %systemdrive%\install\install.cmd %systemdrive%\cleanup.cmd
If you want a cleanup.cmd, place it in the $1 folder, NOT the install folder. Mine looks like this:
@Title Finishing Installation... @if exist c:\install rd /q /s c:\install @if exist c:\drivers rd /q /s c:\drivers @del %0
My setup calls WPI.CMD, which contains these lines:
start /wait %systemdrive%\install\WPI\WPI.HTA
To run, execute the HTA file from Windows 2000 or XP. Windows 98 should run it, but I don't guarantee, nor recommend it.
:: A License ::
I've chosen to make this public, because it would be a shame to keep it cooped up. I encourage people to hack it, customize it, and make it kick a**.
:: Addendum ::
Please, do not email me. I developed this, and I'm happy with it. If I add features that I feel will benifit other people, then I will post updates. If you create good addons, post the source code in this thread, so people can select what options they like, and add it to their source. This application is not geared for casual, "normal" people. But then, what "normal" people would customize their windows install
:: Finally, a Warning ::
If you have Norton Anti-Virus's script blocker feature on, then this will detect virus-like activity. This is because this file will write to %systemdrive%\install.cmd. I've made the script files as easy to browse as possible, so feel free to dissect this in any way you want. The HTA File is HTML, and it calls the JS files. The longest one is Generate.js, which creates the install.cmd file.
This is my first publically available, useful program that I've ever released! I hope it's a success
Number of downloads: 4263
|
OPCFW_CODE
|
Hi, how to Redirect a domain to another domain? Can it be done using DNS only?
What kind of redirect you want 301 or 302
what kind of DNS? cloudflare or something else?
Permanent redirect, 301!
I can use cloudflare but the domain is from GoDaddy.
There is a domain forwarding option in Godaddy. You can use that and godaddy will handle the redirect.
If you want more control over the redirect then you can configure domain on cloudflare as well.
Forwarding with masking is the weirdest form of redirect.
It will continue to show the old domain name but will internally take you to the new domain.
And in around how much time we can see the results from a domain redirection? Do you have any idea about this?
It should happen instantly in most cases but feel free to PM me in case you have any doubts.
I would recommend a simple way. (step-by-step, no step missed)
- Login to Cloudflare
- Add your old domain
- Select free plan
- Update nameservers at your domain registrar.
- DNS propagation may take sometimes.
- Assuming, Cloudflare is active
- Go to DNS, create two A records namely
- Point to your new domain hosting IP address
- Go to Crypto > Turn on Flexible SSL (To have free of cost SSL support for the old domain)
- Go to Page rule
- Create a Forwarding rule
Set HTTP status code
302 whatever you need.
- Apply and save.
The wildcard redirection will start working immediately.
I think, this is better option than any other. Once I had used GoDaddy forwarding. It works but there are two catch.
- Some useless redirect chain
- No HTTPS support (by default)
Thanks for this! I’ll try.
And I don’t want to redirect any visitors or anything from one domain to another. I just want the SEO effects so HTTPS is necessary in this case?
If you were using HTTPS for your old domain? Think logically, how Googlebot will reach? Without HTTPS? No. They still need HTTPS support.
And how to check this? I am redirecting an expired domain! So, I don’t know if SSL was used before or not. I just want to redirect it so that I can see the effects of backlinks from old to the new domain.
In any case, having HTTPS support won’t harm. It’s best for accessibility.
Please pay attention to the rule. It is suggested that
This match http/https/www/non-www everything
Consider something, match viewer case. Whatever visitor enter, next rule will follow.
Finally, here you are free to adjust http/https/www/non-www. Then
$2 means, matching URL pattern with both stars (including permalink, not just homepage.)
I hope this clarifies.
Oh okay! Thanks. I’ll try this now.
You’re most welcome!
Wrong. $2 will mean it will inherit the directory structure of the second Asterisk (*)
Yes, this is what I am saying.
If there were case to keep
http://old.com/* then… I might keep
It is based on Regular expression.
and if You make a rule like:
*.example.com/* -> https://www.example.org/$1/$2 then it will do the following: you visit help.example.com/supportticket.php it will take you to example.org/help/supportticket.php
If the new domain is on blogger, Do I need to point to the four Google IP Addresses here?
|
OPCFW_CODE
|
Deadlocking in Jira Service Management when frequently updating the same issue
If you have an automated process that keeps updating the same issue many times, it might lead to deadlocks after you upgrade to Jira Service Management 4.3 or later. Read on to identify whether your Jira Service Management instance is affected.
In Jira Service Management 4.3, we’ve fixed two issues to improve the overall performance. One of the changes included bounding the thread pools, so limiting the number of concurrent threads:
Our tests have shown significant performance improvements across whole Jira Service Management. However, bounded thread pools can lead to problems in some cases.
We’ve noticed that a bounded thread pool can result in a deadlock in the following scenario:
An instance has
An instance has an automated process that keeps updating the same issue (many times in one minute)
To check if your Jira Service Management is affected, you can run the following query periodically during peak times:
select p.pkey, i.issuenum, issueid, count(*) count_updated from ( SELECT g.issueid, g.created as date FROM changegroup g -- all issue edits UNION ALL SELECT a.issueid, a.updated as date FROM jiraaction a -- all comments ) as all_events join jiraissue i on i.id = issueid join project p on p.id = i.project WHERE date > now() - interval '1 minute' group by 1, 2, 3 order by 4 desc;
select p.pkey, i.issuenum, issueid, count(*) count_updated from ( SELECT g.issueid, g.created as ddate FROM changegroup g -- all issue edits UNION ALL SELECT a.issueid, a.updated as ddate FROM jiraaction a -- all comments ) all_events join jiraissue i on i.id = issueid join project p on p.id = i.project WHERE ddate > CURRENT_DATE - interval '1' minute group by p.pkey, i.issuenum, issueid order by 4 desc;
If the query shows that your Jira Service Management is updating any issue many times per minute, your instance may be affected by this issue. Tests have shown that up to 60 updates per minute on a single issue shouldn’t be a problem.
A sudden spike in the number of updates for an issue, which exceeds the number of threads in the thread pool, might also result in a deadlock. Such a deadlock will be resolved eventually, but some issues might end up with a corrupted SLA.
Another query that may indicate your Jira Service Management is affected by this issue is the following:
select * from "AO_319474_MESSAGE" where "CLAIMANT" = NULL and "CLAIM_COUNT" > 0;
If the query shows there is a small number of events unclaimed but with a high claim count, it may indicate that your instance is affected by this issue.
Jira Service Management 4.9 and above
In Jira Service Management 4.9, we've improved the reliability of SLA processing. These changes are hidden behind a feature flag, so if this problem occurs, enable the feature flag
sd.internal.base.db.backed.completion.events as per the steps in this KB article https://confluence.atlassian.com/jirakb/enable-dark-feature-in-jira-959286331.html.
Jira Service Management 4.3 - 4.8
If you are on a Jira Service Management version between 4.3 and 4.8, you can fix this issue by making changes in the database.
Run the following query against your database to check if the
select * from propertyentry where property_key='sd.event.processing.async.thread.pool.count'
Complete one of these steps, depending on whether you have this property or not.
If the property doesn’t exist, use this query. Take into consideration that the default value is 5.
//This gives the id to use in the next queries. select max(id) + 1 from propertyentry; insert into propertyentry(id, entity_name, entity_id, property_key, propertytype) values (<id from previous query>, 'sd-internal-base-plugin', 1, 'sd.event.processing.async.thread.pool.count', 3); insert into propertynumber values (<id from the first query>, <new pool size value>);
If the property exists, use this query.
update propertynumber set propertyvalue=<new pool size value> where id=<id present in the propertyentry table>;
sd.event.processing.async.thread.pool.countto a value not greater than the number of available threads on a node should improve throughput performance. Any larger value will very likely not result in further performance improvements.
OffThreadEventJobRunner to a large number can lead to one of the problems that we were trying to solve in the first place, so you’ll need to increase the number of available database connections as well.
To increase available database connections, see Tuning database connections.
|
OPCFW_CODE
|
Re: Semantics (was Re: Inheritance in XML [^*])
[last things first] Tim Bray wrote: > And finally... words are only of use in facilitating human > communication when there is some shared understanding as to their > denotation and connotation. The term "semantic", judged by this standard, > has clearly and empirically lost its usefulness in this discussion. I think that this discussion is important, because I remember how confused I was in the early days when I tried to understand this same distinction in SGML. "SGML doesn't supply semantics, just syntax." So ID/IDREF was "syntax", but HyTime links are "semantic." What??? XML has the same problem. This is not only confusing, but damaging. Suggestions for improvement can be dismissed: "that's not syntax, that's semantics" as if there were a clear line between the two, and as if XML wasn't already straddling the line (even if we decide that it is fuzzy). For example, as james points out, discussion of subtyping and inheritance is meaningless in a language with no semantics. But we seem to agree, now, that DTDs have a semantic, so we can stop beating that particular horse. Let me suggest this definition for semantic: a mapping from a syntactic feature to an abstraction. A language specified entirely in BNF does not have a semantic. A language specified at least in part in prose *might*. I argue that XML does: Tim Bray wrote: > > Well, we just have a difference of perception. I think that > "element", "element type", "notation", and so on are profoundly > *syntactic* constructs. I think an element is a piece of an XML > document that is bounded by tags; Okay, let's work from that definition. This is an element: <ABC FOO="DEF">foo</ABC> Now I make an XSL rule (or DOM query, or XLL link) that works on "elements of type ABC" (according to the XSL spec.). Is the XSL spec. going to define how to get from the text above, which is syntactically an element, to an abstract object of type "ABC" with an attribute with name "FOO", value "DEF" and content "foo"? The fact that these other specs speak of "elements" and "element types" indicate that the people who make these specs. consider these things to have been defined not only syntactically, but as abstractions, in the XML REC. In other words, the string above isn't just "in the language", or "out of it." It has a particular interpretation *under it*. It describes an abstraction. Who defines the mapping from the sequence of characters to the abstraction that these other specs work on? I say that the XML spec. defines this mapping, for two reasons: #1. Everyone seems to think it does (including other people in the W3C, the editors, the people who invented SAX and so forth). Nobody is going around defining how to get from element syntax to element abstractions, so they must think that the job is already done. #2. The XML spec. *itself* uses that abstraction. How else can XML check an element against the content model and attribute constraints defined in its "type"? I suppose that there is such a thing as a completely syntactic "abstraction" (e.g. Lisp S-Expr), but it's stretching it to claim that XML is defined this way when you take into account point #1. The abstractions "persist" after the document has been validated -- they are the result of the process. Paul Prescod - http://itrc.uwaterloo.ca/~papresco "Perpetually obsolescing and thus losing all data and programs every 10 years (the current pattern) is no way to run an information economy or a civilization." - Stewart Brand, founder of the Whole Earth Catalog http://www.wired.com/news/news/culture/story/10124.html xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev@i... Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/ To (un)subscribe, mailto:majordomo@i... the following message; (un)subscribe xml-dev To subscribe to the digests, mailto:majordomo@i... the following message; subscribe xml-dev-digest List coordinator, Henry Rzepa (mailto:rzepa@i...)
PURCHASE STYLUS STUDIO ONLINE TODAY!
Purchasing Stylus Studio from our online shop is Easy, Secure and Value Priced!
Download The World's Best XML IDE!
Accelerate XML development with our award-winning XML IDE - Download a free trial today!
Subscribe in XML format
|
OPCFW_CODE
|
My goal here is to read from one directory and write out to another. The database is running version 12.0.1 build 3592. I think I have already addressed any permissions issues as I can successfully read and write into both folders. The source folder table contains 94,000+ entries of various folders, subfolders, and files
I am trying to use a procedure that takes a documentID, reads the path from one of our link tables, pulls the contents of the file with that same path and inserts those contents with a new name into the destination table, writing the file to the folder.
The first issue that I ran into is using "...where file_name = XYZ " to compare the filename to a string (file path) crashes the database. I have not checked to see if this is addressed in an EBF.
So I am instead using the like comparison. When I run the select statement with the like comparison it returns the results instantly:
select first '87test.jpg' as file_name, SourceFile.contents as contents from ClientData as SourceFile where SourceFile.File_Name like 'Long Client Name\2012\Classification\Long Address\Images\testpic.jpg' ;
But when I use the same select with an insert (either with auto name or specifying the columns) it does not finish. I have let it run for 20 minutes without a result. I added the "first" to the select to try to make it faster, but it does not seem to help.
insert into clientdataland (file_name, contents) select first '87test.jpg' as file_name, SourceFile.contents as contents from ClientData as SourceFile where SourceFile.File_Name like 'Long Client Name\2012\Classification\Long Address\Images\testpic.jpg' ;
This statement will run quickly, but I assume it is because the file name is not as long as the previous one so the like performs more quickly:
insert into ClientDataLand with auto name select first 'NewFileName.pdf' as File_Name, Contents from clientdata where ClientData.file_name like 'Scan 001.pdf'
As it seems the like with a long file name is causing the issue I would rather use the = , but that crashes the database.
This is how I create the directory servers:
CREATE SERVER ClientDataSRV CLASS 'DIRECTORY' USING 'ROOT=D:\Data\ClientData\;SUBDIRS=100;CREATEDIRS=YES'; CREATE EXTERNLOGIN user1 TO ClientDataSRV; CREATE EXISTING TABLE ClientData AT 'ClientDataSRV;;;.'; CREATE SERVER "ClientDataLND" CLASS 'DIRECTORY' USING 'ROOT=D:\Data\DestData\;SUBDIRS=100;CREATEDIRS=YES'; CREATE EXTERNLOGIN user1 TO ClientDataLND; CREATE EXISTING TABLE ClientDataLand AT 'ClientDataLND;;;.';
With all that background, my questions are how do I make the insert (or like comparison if that is the real issue) perform faster, and is the = issue a bug.
Just as another hint:
As you have the documentID and can access the document path, wouldn't it be easier/faster to use xp_read_file()/xp_write_file() to copy the file contents between source and destination?
answered 12 Sep '12, 17:50
|
OPCFW_CODE
|
Since I upgraded to 164, my clients, which use ipfire as their dns, fail to get an ip address for several sites (including ipfire.org). I tracked it down to a problem with the DNS in ipfire. When switching to the Domain Name System page on the GUI, it normaly shows “working”, but when i hit “Check DNS Servers”, it takes a long time and afterwars the page shows “Broken”. Hovering the mouse cursor over the “error” message in the status column, the popup shows several messages like this:
truncated reply from n.n.n.n@53(UDP), retrying over TCP; connection timeout for n.n.n.n@53/TCP)…
where n.n.n.n stands for any DNS I try. These are my ISPs, but 220.127.116.11 or 18.104.22.168 are no better. Obviously, DNS is not completely broken, but does not get all input, hence the effect that some adress resolution works, but others not. I changed my pc to use the ISP’s DNS directly. This way I could write this message, but I lost my local name resolution this way and all my portable devices still have a problem. What can I do?
red us attached to a cable modem connected by ethernet to my cable provider, which is part of the swiss quickline the modem is configured in bridge mode and provides a static ip-adress to ipfire by dhcp
blue is the internal wifi adapter of the firewall
orange is a configured ethernet segment, but there are currently no hosts on this segment
green is an ethernet segment, connected to a switch that provides the internal connections
I rebooted everything completely (power off) including switches, but the problem remain
I’ll post output of iptables -L -n -v after completing this post, same with the screenshots and the exerpt from messages (don’t want to loose this text by screwing something up - don’t know this editor well)
I don’t see any warning when executing commands either with the web interface and/or ssh. I executed dmeg on ssh and do not see something disturbing me, but anywey: here is it’s output:
Same issue with my IPFire installation. Just updated to 164 and all DNS traffic stopped working after I switched on the new function. Even deactivating and restoring the backup before the update did not help. Currently the clients can use DNS by modifying the DHCP server’s DNS option to use the ISP DNS servers.
But all the internal DNS clients are not reachable now. Is there maybe any quick workaround on the command line available to make DNS usable again?
Geoblockiing and/or Intrusion Prevention definitely has something to do with the DNS Problem. I had both enabled when doing the upgarde to 164. Now I disabled both and rebooted - And DNS works again. Of course I’d like to have both GB and IPS back. So if I can help to investigate things further please let me know.
Can you confirm that you are using the Talos VRT rulesets in your IPS setup. If you remove those rulesets and use the Emerging Threats ruleset and turn on IPS does your DNS then work.
I am running CU164 with IPS using Emerging Threats and Abuse.ch rulesets and my DNS is working fine (TLS based).
It could be that the IPS problem with the Talos VRT rulesets mentioned in an earlier post could be causing an interference with the DNS.
Talos VRT ruleset seems to be the problem with 164 on my system. I re-endabled IPS with abuse.ch like suggested, and DNS kept on going. Then I turned on Geoblockin again, and DNS still seems to work - At least “check dns servers” immediately returns with a green OK. I’ll let it run now for a while and have an eye on it. @pmueller: This is an update: After 16h and a reboot, all is still going well, and there is another thing: ipfire reacts much more snapier again since I removed Talos ruleset. I did not recognise this immediadely, but with 164 and Talos ruleset, not only DNS experienced problems with truncated replies from upstream DNS servers, but the whole system reaction was slugish. This is gone now.
|
OPCFW_CODE
|
[Oberon] Oberon-2 cross compiler source for INMOS T800 Transputer available now (OP2/V4)
lab.eas at gmail.com
Fri Nov 29 19:04:50 CET 2013
I don't appreciate 'these-days-twitter-style' minimal content burps.
So here is some contents:-
ETHO is much less about the language, than the unsurpassed total OS, where you
see everytihing on the single screen, without the need to bob-yo-head upNdown
between keybrd & screen.
LinuxEthOberon builds on this to newly discovered extremes.
My original 40GB IDE for ETHO from the 90s started failing; and I have some
valuable legal files on some of the 48 partitions.
LNO can read these, on the same PC that's running Linux for other tasks.
LEO is more convenient/versatile than LNO, but I was astounded that LEO
can ALSO read the AosFS partitions of old:nativeOberon.
It doesn't directly fetch and display file contents.
But you can: from linux -> running LEO -> run any *nix command.
And of course the *nix command repetoire is massive.
This is linux leveraging LEO leveraging Linux!
So with 'try' being a symbolic-link to an AosFS partition on the
old NativeOberon disk something like:
read the whole NO partition &
delete all non-ascii-chars &
fold the lines < 80-char-len &
print all lines to a file, which contain the critical name that we're
allows you to search the whole partition, and extract the critical text.
Previously I remember examining the AosFS structure.
IIRC Partitions.Mod has got nice clear code, and it's not difficult to extract
the dir & file structure. So the actual files could perhaps be accessed.
I wonder what would happen by 'porting' Partitions.Mode to LEO?
--- Here's actual logged code and results including my errors:--
>> -> System.Execute cat try | tr -d \\200-\\377 | hexdump -C | head
>> == looks OK. So count lines containing "Heus". Let's see line-count ?
>> -> System.Execute cat try | tr -d \\200-\\377 | fmt | grep Heus | wc -l
>> ==1 <- ! I can see *MANY* "Heus" in many of the different 'folded' lines!
>> -> System.Execute cat try | tr -d \\200-\\377 | fmt | wc
>> == 1000653 9864738 72189840 <- 100Klines for 72MB
>> What am I doing wrong, to fail to `grep <all lines containing "Heus">?
> Your 'tr' command is leaving behind a lot of non-printing characters in
> the \000 to \037 range, which is going to cause 'grep' to report
> "non-ASCII" and may be seriously confusing the 'fmt' command. You do
> need to preserve formatting characters such as \n, \t, \r, but you should
> delete the others. Try this:
> tr -d \\000-\\007\\016-\\037\\177-\\377
> That leaves the just the range \010 (BS) through \015 (CR) and the
> ASCII characters \040 (SPACE) through \176 (~).
OK, afterwards I thought of the (chars < Octal_37), like <bell>.
I'll test and file-away your tr-version.
Perhaps `strings` is appropriate.
---- end of USEnet query:-------------
For non*nix users:
cat try | tr -d \\200-\\377 | fmt | grep Heus | wc -l
means: output file:try [the whole partition] &
delete all chars between Octal(200...377) &
use the `format` filter to <clean up> the lines of text &
output only lines containing "Heus" &
count and list newline, word, and byte of previous stage.
So with LEO, you get the superb HCI of ETHO, PLUS the massive facilities of
the *nix industry.
== Chris Glur.
Did I mention that it's free?
Provided you can detach yourself from the herd....
On 11/27/13, "ɹǝzıuıʇnɹɔs ǝoɾ" <scruty at users.sourceforge.net> wrote:
> written in 1990 by Stephane Micheloud.
> since 2013 there's a free open-source Transputer emulator; follow link.
> Oberon at lists.inf.ethz.ch mailing list for ETH Oberon and related systems
More information about the Oberon
|
OPCFW_CODE
|
The Excalibur OS is an OS that’s having mining management system, a system in which user can mine most of the famous coins.
IMPORTANT: By investing in this business you agree to our Disclaimer. All information including our rating, is provided merely for informational purposes. CryptoTotem does not provide investment advice.
What is Excalibur OS
Excalibur is an operating system designed for PCs and laptops. It is supposed to support application files from major OS like Android, Windows, Mac, iOS and Linux. Excalibur OS integrates an in-built virus protection system and anti-theft system. The solution also offers a Mining Management System that can mine and manage users coins.
The team have issued an ERC20 XOS Token. The token holders will be able to access premium services and be the first to test Excalibur Beta version that will be launched upon ICO completion.
DetailsPre-sales: Sep 03, 2018 - Oct 03, 2018
Public sales: Nov 20, 2018 - Dec 30, 2018
Pre-sale token supply: 200,000,000
Token supply: 1,800,000,000
LegalBlockchain Platform: Ethereum
Token infoTicker: XOS
Token price in USD: 1 XOS = 0.0055495 USD
Accepted currencies: ETH
Pre ICO: 50% bonus
1st Phase: 30% bonus
2nd Phase: 25% bonus
3rd Phase: 15% bonus
Last phase: 10% bonus
40% - Final Phase
20% - ICO 2nd Phase
20% - ICO 3rd Phase
10% - Pre ICO
10% - ICO 1st Phase
60% - Token for Crowdsale
20% - Excalibur
10% - Reserves
10% - Advisors
Excalibur OS Roadmap
Project Idea Was Proposed
The Team Was Build & Development Of Excalibur OS Was Started
Kernal Development Was Finished In The Month Of October
User Interface Got Ready In February.
Virus Protection System For Excalibur OS Was Built.
Team Started working on Majon Part i.e. All Platforms Power in a Single Platform.
Platform Development Was Finished
Mining Management System Development Was Started
Excalibur OS Got Official Existence.
PRE ICO In September 2018
ICO, & Testing on BETA Version
BETA Version of Excalibur OS will be releasing & token will be available for trading
BETA Version With Artificial Intelligence (AI) like SIRI & Cortana
The BETA Version with Next Update will be launched that is Virtual Reality.
|
OPCFW_CODE
|
exception on Ubuntu dotnet6.0
Environment: Ubuntu 22.04, dotnet6.0.100
sdk : Bin/net6.0 Aspose.PDF_22_6.zip (dll only)
code:
using System;
using System.IO;
using Aspose.Pdf;
using Aspose.Pdf.Text;
namespace pdf_toolbox
{
public class NumberPdf{
public void doNumber(){
Console.WriteLine("numberPDF....");
Document doc = new Document();
Page page = doc.Pages.Add();
page.Paragraphs.Add(new Aspose.Pdf.Text.TextFragment("Hello World!"));
doc.Save("HelloWorld_out.pdf");
}
}
}
log exception:
numberPDF....
Unhandled exception. System.ArgumentNullException: Value cannot be null. (Parameter 'key')
at System.Collections.Generic.Dictionary`2.FindValue(TKey key)
at System.Collections.Generic.Dictionary`2.TryGetValue(TKey key, TValue& value)
at #=ztrFjUrbJYJ47NcR$8LZPthQIUnYfExa7Gg==.#=zJGcmwbM=(#=z8S5r434= #=z844B2I8=)
at #=z2ZQc$Jcr34z9F64PqOoAxo6zJGdn_ZZShQ==.#=zYasEy4MKvAnG`2.#=zVigH4Ju$VQvV(#=z8S5r434= #=z844B2I8=, #=zmDMNKIw=& #=ztxptzkQ=)
at #=zQ_oajpj9GYnwC_x_$CeW5sq5gwYzvZnJTeNLm28=.#=z1VnSO5vegSOD(#=zD4i4y_M$HiiP_8I17jzHejk_l_WY #=zyphWEqwLFT3f)
at #=zFLKHFi4q9yQT1s7qN9F3Pv5T0nRo0qWzQmpbNnoq3YDi.#=zZyoz9Osy_H4G()
at #=zFLKHFi4q9yQT1s7qN9F3Pv5T0nRo0qWzQmpbNnoq3YDi.#=zMO8ucBo=()
at #=zQJkE9Lf6vNZzWxdK0PxKoHTE9hWkASZ5zNdndNZOH$KI.#=zDsDANj0=(#=zIrdqJtR4jlHtd_Q9LhynyGAYkowV #=zQ$qH$4U=, #=zLS0bXYESTwt6s9O9ge8mOCQdQu2XgiesZQ== #=zk6jw8Bs=, #=zk_baTkeGsSyYK2fUlwbWQSTWwiuje5rVhDZznXUVXMgE #=ziSVaJls=)
at Aspose.Pdf.Page.CalculateContentBBox()
at Aspose.Pdf.Page.#=z1bMZ4IQ=(Page #=zk6jw8Bs=)
at Aspose.Pdf.Page.#=zSmp$qf7iRTeUfaT1ow==()
at Aspose.Pdf.Document.ProcessParagraphs()
at Aspose.Pdf.Document.#=zG7VhpvNMzChd(Stream #=zB8p3Dxo=, SaveOptions #=zsRZvAOZNkW4o)
at Aspose.Pdf.Document.#=zFx2Xbgl0fHER(String #=zu4ufdYNMrcDG)
at Aspose.Pdf.Document.Save(String outputFileName)
It doesn't support net6.0 ?
Is anyone there?
I'm seeing this behavior as well
@xinyu391 @gregfullman
We apologize for the delayed response. Please note that we recommend and encourage posting such issues in our official support forum where you can track the issue resolution progress appropriately. Nevertheless, could you please try using Aspose.PDF.Drawing package from Nuget instead of Aspose.PDF? In case issue still persists, please create a post in Aspose.PDF Support Forum as requested before and we will assist you there accordingly.
Tested with docker, issue wasn't reproduced
Since there have been no updates or additional questions for more than half a year, the issue is considered resolved.
|
GITHUB_ARCHIVE
|
ambien 10mg prescription amounts
The condition is buy zolpidem and desoxyn tablets online not typically inherited from one's parents. After the start of orgasm, pulses of semen begin to flow from the urethra, reach a Order zolpiem in the uk peak discharge and then diminish in flow. Data from Morgan Stanley predicts that 2014 is the year that the number buy zolpidem and desoxyn tablets online of mobile users will finally surpass that of desktop users worldwide. In this stressed state, the glycosidic bond is more easily broken. Since the 1950s, the organization has spread towards Northern Italy and worldwide. It is manufactured in the Netherlands. Gas reinjection is the reinjection of natural gas into an underground reservoir, typically one already buy zolpidem and desoxyn tablets online containing both natural gas and crude oil, in order to increase the pressure within the reservoir and thus induce the flow of crude oil or else sequester gas that cannot be exported. Cutaneous anthrax is rarely fatal if treated, because the infection area is limited to the skin, preventing the lethal factor, edema factor, and protective antigen from entering and destroying a vital organ. Worldwide, around buy zolpidem and desoxyn tablets online 16 million adolescent girls give birth every year, mostly in low- and middle-income countries. Beginning in 2006, the Marshall School of Business will have a buy zolpidem and desoxyn tablets online San Diego satellite campus. More commonly, buy zolpidem and desoxyn tablets online crops are strip picked, where all berries are harvested simultaneously regardless of ripeness by person or machine. FTD was formerly a co-op, owned by its member florists. This can result in unrealistic expectations due to coverage of radical medical procedures and experimental technology. Most of the antibiotics used in meningitis have not been tested directly on people with where to purchase zolpiem online ireland meningitis in clinical trials. Walgreens, for example, uses satellite technology to share patient information. Online shoppers commonly use a credit card or a PayPal account in order buy zolpidem and desoxyn tablets online to make payments. The roots of lotus are planted in the soil of the pond or river bottom, while the leaves float on top of the water surface or are held well above it. Job sharing can also be a disadvantage if the employee cannot afford the part-time wages or benefits. As Ed arrives on time, a stowaway zombie attacks him, causing him to crash into the mall's central park. Nacho also has a straight job working at his father Manuel's auto upholstery business. Freeborn women of ancient Rome were citizens who enjoyed legal privileges and protections that did not buy zolpidem and desoxyn tablets online extend to non-citizens or slaves. Lamborghini V10, which was also developed under the Volkswagen Group ownership. In combat, auto-aim and a cover system may be used as assistance against enemies. Streptomycin, found in 1942, proved to be the first drug effective against ambien 10mg best price the cause of tuberculosis and also came to be the best known of a long series of important antibiotics. Nursing specialty certification is available through the Canadian Nurses Association in nineteen practice buy zolpidem and desoxyn tablets online areas. Naadam Festival is the largest festival, celebrated in every town and village across the country. Gaspar Silveira One of buy zolpidem and desoxyn tablets online six teams to have won buy cheap zolpidem tartate made in india onlone more than 130 games in a span of four seasons. With the compressed slug method, weight varies less between capsules. Evidence suggests that topiramate antagonizes excitatory glutamate receptors, inhibits dopamine release, and enhances inhibitory gamma-aminobutyric acid function. The main-effects model proposes that social support is good for one's health, regardless of whether or not one is under stress. Athletes have historically been willing to take legal and health risks to improve their performance, with some even stating their willingness to risk their lives, purchase generic zolpiem in singapore as buy zolpidem and desoxyn tablets online exemplified by research by Mirkin, Goldman and Connor in researching attitudes to the so-called Goldman dilemma. Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with buy zolpidem and desoxyn tablets online finite-dimensional vector purchase zolpidem spaces. Lee wanted to present new evidence showing buy zolpidem and desoxyn tablets online that he had zolpidem 10mg prescription korea fetal alcohol syndrome disorder, significant brain damage, and intellectual disability. These substances may interact with cyclobenzaprine:Cyclobenzaprine may affect the medications used in surgical sedation and some surgeons request that patients temporarily discontinue its use prior to surgery. Bachelor's buy zolpidem and desoxyn tablets online degrees at the University of Botswana normally take buy zolpidem and desoxyn tablets online four years. Other progestin-based implants that have been placed in animals include Norplant, Jadelle, and Implanon. Making a run out of the concert to save himself from the crowd's angst, Tommy goes and Order clonazepam tablets online uk hides inside a dilapidated structure. Supplementation is recommended to prevent deficiency in vegetarians who are pregnant. Instead, it maintained a system of taxation on the production that took place in the territories that they controlled, in exchange for protecting the growers and establishing law buy zolpidem and desoxyn tablets online and order in these regions by implementing its own rules and regulations. Mothers are often blamed for the birth of a female child. Although the bill passed through the Assembly and various committees, it failed by 2 votes on the Senate Floor. Audio porn can include recordings of people having sex or reading erotic stories. Conversely, the Xanax 1.5mg discount online risk of being robbed or blackmailed posed to clients of sex workers appears to be purchase generic ambien with mastercard much lower than many imagine. Many blame this drastic fall on the rise of herbivore men in Japan. They require patience and may be slow to house train, but in return, they can be quite comical, entertaining and buy zolpidem online in india caring companions. However, it is unclear if there is a cause and effect relationship. They work by affecting variables very close to the antidepressant, sometimes ambien tramadol affecting a completely different mechanism of action. USA; in terms of tonnes of production, it was the world's third-most popular medicine. For larger compounds the only metabolic mode that has shown to be effective is cometabolism. Benzodiazepines do not have any pain-relieving properties themselves, and are generally recommended to avoid in individuals with pain. This is a five stored building with 500 students capacity. At the time, the ability to vote was restricted to wealthy property owners within British jurisdictions. If an individual acquires a large injury resulting in extreme blood loss, then a hemostatic agent alone would not be very effective. Various social and economic factors are cited as playing a role in this trend. He was fired, however, following the sixth game of the season that ended in a loss to Vanderbilt. Coupons are usually issued buy zolpidem and desoxyn tablets online by manufacturers of consumer packaged goods or retailers, to be used in retail stores as part of a sales promotion. There is considerable evidence that males order zolpiem online in uk are hormonally predisposed to higher levels of aggression on average that females, due to the effects of testosterone.
Purchase alprazolam columbus Low cost ambien Soma drug name Valium 5mg to order online
|
OPCFW_CODE
|
I know you can't give any timescales, but are making the apps php8 compatible being worked on?
A client has forwarded an email from their hosting saying they are upgrading to php8 from the first week of November.
At the moment I'm thinking I'm going to hire a third party developer to create me a set of php8 compatible apps. This will leave me with a set I'll no longer be able to update and also out-of-pocket. But if this is the only option I'm prepared to do this.
Let me know either way.
Did you go ahead with this? Which apps aren't PHP8 compatible at the moment?, do you know? We do have until November 2022 until PHP 7.4 becomes unsupported. I've recently developed a Perch site which uses Shop, Blog, Members, Forms and all seems well on PHP 7.4 running Perch 3.1.7.
I'm hoping I can keep things running smoothly on 7.4 until either Perch is updated or I move all my sites over to Craft CMS. At least I've got a year to decide.
I'm currently in the process of getting php8 versions. I can't remember of the top of my head which ones aren't. A while ago I tested a few sites my upping the php in MAMP and it threw up deprecation warnings. The apps haven't been updated for quite a while, so I'm thinking they are all going to need looking at. The plan is to get a full set, so if I need to use any in the future I can.
Where are we with the SCA issue?
There was this post:
...with a version posted to a share box...link is now expired... What is the latest version of Perch Shop?
Site says 1.26b. Can you please let us know about this?
is there any news on this? This and PHP 8 compatibility surely count as technical support.
I can see that some people are feeling that they need to hire third party developers or work on Perch apps themselves to make them compatible with PHP 8. To my mind, this is not optimum and is a consequence of very little communication by the Perch core team. Perhaps the Perch core team would like to rectify this by commenting and giving an update on their own work to make the apps and Perch itself compatible with PHP8? It would surely be beneficial?!
George G Matt A Will B ??
We all know this isn't going to happen. At this point it's pretty obvious that they've taken more on than they can handle. They're either unable or unwilling to invest the time and money needed to create a viable product, and in the meantime what little users they had left are jumping ship, and I don't blame them.
It's a shame as I'm sure this isn't what Drew and Rachel wanted Perch to become when they sold it.
|
OPCFW_CODE
|
If you don't already have a server in your business, you're probably already using a desktop system as a server of sorts. Maybe it controls some files or printers that you can share with other PCs, but there are key differences between servers and desktops, and many good reasons to invest in a server for your small business. First and foremost, you need to understand these key differences between servers and desktop computers.
So let's look at some of those differences. On the surface, they seem very similar. Both have a CPU, RAM, and hard drives for storage. Servers, however, are designed with heavy‐duty back‐end tasks in mind, and aren't well equipped to run normal desktop workloads, such as graphics‐intensive applications. Servers excel at running services supporting those desktop applications, such as databases.
The CPU in an entry‐level server isn't all that different from a midlevel desktop, but does differ in some ways, such as cache sizes. To put it simply, CPU cache is a small, dedicated pool of RAM that the CPU can use to store frequently requested data. If the cache is larger, the CPU appears faster because it can store more data in the cache for faster recall. Server CPUs generally have larger and more varied caches than desktop systems for just this reason. Like modern desktop systems, server CPUs can have multiple cores, although they generally make better use of multiple cores than desktop systems will.
Multiple‐core CPUs are basically a single CPU that contains two or more processing cores. In essence, it’s like having several CPUs on a single chip. Using multicore CPUs can greatly increase the processing power of the system and lengthen the usable life of your server.
Some entry‐level servers, such as an Intel Celeron 445, may have a single‐core CPUs. For an office of fewer than 10 users, this is generally sufficient. Be aware, though, that a few extra dollars invested now in on a higher‐powered server ‐‐ say a dual‐ or even a quad‐core CPU ‐‐ can be greatly beneficial as your business grows. Overestimating your needs now might be your best bet.
One of the major hardware differences between servers and workstations is the disk subsystem. While desktops have a single hard drive, servers generally have several hard drives configured to appear as a
single disk. This is called RAID, or Redundant Array of Inexpensive Disks. RAID is widely used to protect servers from individual disk failures, critical when your business is at stake. If a drive in a RAID array fails, it does not mean that the data contained on that drive is lost, since other drives in the array still contain the data. There are multiple levels of RAID, but for entry‐level servers, RAID levels 1 and 5 are the most common.
RAID level 1 is a simple mirror of two hard drives. The data stored on a RAID 1 array exists on both drives at all times. If one drive fails, the other drive still has a complete copy of the data, and the server can continue to function. RAID 5 is more complex, involving at least three drives, and the setup can survive the failure of any one drive. Either option will protect against failures, but RAID 5 delivers more available disk space than RAID 1 and is faster as well. This data protection is crucial to a server, and any server you add should be equipped with a RAID controller and a RAID array. Make sure a hardware‐based RAID controller is in the server as well, not just a software‐based one ‐‐ software‐based RAID works, but can be more problematic for the uninitiated, since there isn’t a dedicated controller managing the hard drives. If you’re using software RAID only, you may run into difficulty in the event of a disk failure, especially if you’re not an expert at software RAID recovery procedures.
The disks in servers are also generally different than those in desktops. There's more to hard drives than just size ‐‐ the speed at which the platters within the drive spin can greatly affect performance of the entire server. Hard drives running at 10,000 RPM can deliver data faster than slower drives running at 5,400 RPM, especially under load. There are several different types of hard drives as well, such as SATA and SAS drives. For most entry‐level servers, 7,200 RPM SATA drives configured in a RAID1 or RAID 5 array will be sufficient. For higher performance, you may want to consider SAS drives. Generally, higher‐performance disks are necessary only if the server will be running heavier applications, such as a large database.
Some entry‐level servers offer hot‐swappable hard drives, while others do not. Hot‐swappable drives enable a hard drive to be replaced without shutting the server down or even opening the case. If a disk fails, it can be pulled out of the running server and replaced. When the disk is replaced, the RAID controller will then rebuild the RAID array to ensure that the data is protected. If the server does not have hot‐swappable hard drives, then the server must be shut down, opened, and the failed hard drive replaced inside the system.
RACK VS. TOWER
Entry‐level servers are generally available in either a rack‐mount or tower form. If this is the first server for the company, you probably won’t have a suitable server rack already in place, and for a single server, adding that rack may not be cost‐effective. Thus, a tower server is likely to be the best option.
Many tower servers can later be converted to rack‐mount servers with the appropriate conversion kit, so if your infrastructure grows to the point where a rack is required, your existing investment can be modified to fit your needs.
A tower server is generally only slightly larger than a regular desktop system, and can be placed under a desk or in an area that has little traffic. Though it’s a good idea to also have a monitor, keyboard, and mouse hooked up to the server, don’t be tempted to use the server as a desktop system.
A WORD ABOUT RAM
Servers also generally have very fast RAM, which is quite important for performance. Since servers are running many different tasks simultaneously, fast RAM and a fast system bus are crucial to maintain smooth operation. ECC, or error‐correcting, RAM is also a feature of most server‐class systems. ECC helps protect the integrity of the data stored in RAM during normal processing. It costs a bit more than non‐ECC RAM, but in a server system, it’s generally a requirement.
Another feature of most servers is redundant power supplies. This means that the server has at least two power supplies that both draw power during normal operation. If one power supply fails, or power is cut to that supply, the server continues to function. Not all servers have this feature, but some offer the option of adding another power supply later on. As with any computer, you should add a UPS (Uninterruptible Power Supply) to protect your server from power surges and outages.
THE STRIPPEDDOWN SERVER
Since servers aren't really meant to run desktop applications, their graphics subsystems are generally very basic. Many servers don't even have a keyboard, monitor, or mouse plugged into them, since they can be managed through the network. In a small business environment, however, you may usefully have a monitor, keyboard, and mouse on your server, especially if it's the only one. Most entry‐level servers are only slightly larger than desktop systems and can fit under a desk or in a corner. Make sure that there’s adequate ventilation and climate control wherever you place the server.
For connecting to the network, most servers have at least one gigabit network interface (the piece of hardware that talks to the network). These interfaces are different than the network cards in desktops since they perform certain network functions by themselves, relieving the server's CPU to handle more complex tasks. The end result is that they can push more data under load than normal network interfaces. If multiple interfaces are in the server, they can even be bonded together to provide greater bandwidth.
By adding a server ‐‐ even an entry‐level server ‐‐ to your network, you can benefit greatly from these features, especially considering the data protection available in a RAID array. Also, by adding a server, you reduce the likelihood of spyware and viruses impacting your business data, as long as you run strong antivirus software on the server.
For the most part, servers in small‐business settings are designed to run a large number of different services and applications to support a small number of users. Products like Microsoft's Small Business Server are designed to do just that. They offer many different functions that can run on a single dedicated server, but they can support only a relatively small number of users. In many small business environments, this is all that's necessary. And there's one more bit of good news: These days, entry‐level servers are available for nearly the same cost as midlevel desktops, making them a natural for small businesses.
|
OPCFW_CODE
|
pawinski.marek at gmail.com
Sun Jan 9 04:15:22 UTC 2005
>From what i understand you are telling the user to execute
"./configure" and "make" as root which to me is a security risk, i
would rather do it as user and "make install" as root.
On Sat, 08 Jan 2005 18:20:49 -0500, Temlakos <temlakos at gmail.com> wrote:
> On Sat, 2005-01-08 at 23:47 +0100, Maciej R. wrote:
> > Hello out there,
> > I wanted to download aMule but there are no RPM's for Fedora Core 3
> > (anyway - do you know good P2P tools?). Would the Suse 9.2 or FC2 ones
> > work on my FC3? What are the differences between RPM's for different
> > distros? Wouldn't it be easier to download a kind of EXE file for all
> > distros?
> > --
> > Maciej R. <m.mail at vp.pl>
> I don't pretend to be an expert; I've used Fedora for about a year now,
> and am on the third release. When I wanted to install an application
> that did not yet have an FC3 build, I first tried to work with the FC2
> build. It seg-faulted every time I tried to so much as load it. (It was
> GRAMPS, the genealogy system, which right now doesn't have an FC3
> build.) So I grabbed the tarball and ran the usual scripts (./configure
> && make && make install). And it worked, and continues to work.
> RPM's are fine--*if* they are built for your specific kernel and desktop
> environment or are at least within reasonable tolerances. If they
> aren't, the applications seg-fault away every time you try to load them
> or do anything significant with them. (Typically I get "signal 11,"
> which is a general segmentation fault. That usually occurs when you try
> to divide by zero, or--more likely--try to de-reference a pointer that
> in fact is pointing to nowhere.)
> The RPM repository keepers try their best to offer RPM's that will
> install properly on your particular release. Three of them (Axel, Dag,
> and Dries) are regular contributors to this list (and they might not be
> the only ones). If *they* don't have an RPM in their most-stable repos
> for any given application, then you probably are better off using the
> tarball (typically named application.releasever.tar.gz) and building
> your application directly from the supplied source code. Which is
> something you're not allowed to do in Windows ("WinDoze"), and that's
> another great feature of Linux.
> In short: check to see whether the application you want is available as
> a source-code archive, also known as a "tarball" (for TAR, the
> traditional Unix archive format). Most "tarballs" are not only packaged
> with TAR but are then re-packaged with GZIP--hence the double extension
> ".tar.gz". So if you find one of those, here is what you do:
> 1. Download this to your home directory or to your desktop--anywhere
> where you can get to it.
> 2. Start a Terminal window--you'll want to work with the command line.
> 3. Execute "su", for "Super User." When it asks for your root password,
> give it. You have just become "root" for this session.
> 4. Execute "tar -zxvf application.releasever.tar.gz" (Here replace
> "application.releasever" with whatever comes before the ".tar.gz" in the
> file's name.)
> 5. That process will create a new directory having the name of the
> application. Execute "cd application-releasever" or whatever.
> 6. Execute "./configure". Hopefully this should proceed without error.
> (If you get fatal configuration errors, you can't build this application
> on your system for some reason, and it will *try* to tell you.) Assuming
> the configuration step completes without fatal error:
> 7. Execute "make". The configuration script has already set things up so
> that "make" will follow some automatic scripts, called "makefiles," that
> direct the compiler and linker as to where to find various source files
> and libraries.
> 8. Execute "make install". This will copy your finished application
> program into the folder "/usr/local/bin". Thereafter, whenever you type
> the application name, your application will start.
> 9. Execute "cd /usr/local/bin" and then "ls" to read the name of your
> new application. You will need to type that name in the "Run
> Application" dialog to use the program.
> 10. Execute "exit" twice. The first time will change you back to
> yourself, and the second will end the Terminal session.
> 11. After you've used the program for awhile, and you're sure you won't
> need to remove it, you can log in as root long enough to remove the
> directory you created with step 4 above. I'd advise keeping that
> directory in place for one month. If you have to remove a program during
> that time:
> A. Start a Terminal window.
> B. Execute "su".
> C. Change to the directory you created in 4 above.
> D. Execute "make uninstall".
> E. Execute "make clean".
> F. Now you can either try steps 5-10 again, or else scrub your system
> of the tarball and the temporary directory.
> Temlakos <temlakos at gmail.com>
> fedora-list mailing list
> fedora-list at redhat.com
> To unsubscribe: http://www.redhat.com/mailman/listinfo/fedora-list
More information about the users
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Bakery.Models;
namespace Bakery.Tests
{
[TestClass]
public class StoreTests : IDisposable
{
public void Dispose ()
{
Order.ResetIdCount();
Vendor.ResetIdCount();
}
[TestMethod]
public void Constructor_InitializesWithCorrectValues ()
{
Store store = new();
Assert.AreEqual(0, store.Inventory.Keys.Count);
CollectionAssert.AreEquivalent(new List<Order>(), store.Orders);
CollectionAssert.AreEquivalent(new List<Vendor>(), store.Vendors);
}
[TestMethod]
public void CreateVendor_CreatesNewVendorWithProperties ()
{
Store store = new();
Vendor vendor = store.CreateVendor("Test Vendor", "Description of Vendor");
Assert.AreEqual("Test Vendor", vendor.Name);
Assert.AreEqual("Description of Vendor", vendor.Description);
CollectionAssert.AreEqual(
new List<Vendor> { vendor },
store.Vendors
);
}
[TestMethod]
public void GetVendor_FindAndReturnsVendorById_ReturnsCorrectVendor ()
{
Store store = new();
store.CreateVendor("Test Vendor 1", "Description of Vendor");
store.CreateVendor("Test Vendor 2", "Description of Vendor");
Vendor vendor3 = store.CreateVendor("Test Vendor 3", "Description of Vendor");
Assert.AreSame(vendor3, store.GetVendor(vendor3.Id));
}
[TestMethod]
public void GetVendor_FindAndReturnsVendorById_ReturnsNullIfNotFound ()
{
Store store = new();
store.CreateVendor("Test Vendor 1", "Description of Vendor");
Assert.IsNull(store.GetVendor(748192));
}
[TestMethod]
public void DeleteVendor_FindsAndDeletesVendorById_RemovesVendor ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("Test Vendor 1", "description here");
Vendor vendor2 = store.CreateVendor("Test Vendor 2", "other description");
Vendor vendor3 = store.CreateVendor("Test Vendor 3", "some other description");
CollectionAssert.AreEqual(
new List<Vendor> { vendor1, vendor2, vendor3 },
store.Vendors
);
Vendor deleted1 = store.DeleteVendor(vendor2.Id);
Assert.AreSame(vendor2, deleted1);
CollectionAssert.AreEqual(
new List<Vendor> { vendor1, vendor3 },
store.Vendors
);
Vendor deleted2 = store.DeleteVendor(vendor3.Id);
Assert.AreSame(vendor3, deleted2);
CollectionAssert.AreEqual(
new List<Vendor> { vendor1 },
store.Vendors
);
}
[TestMethod]
public void DeleteVendor_FindsAndDeletesVendorById_DoesNothingIfNotFound ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("Test Vendor 1", "description here");
CollectionAssert.AreEqual(
new List<Vendor> { vendor1 },
store.Vendors
);
Assert.IsNull(store.DeleteVendor(410321));
CollectionAssert.AreEqual(
new List<Vendor> { vendor1, },
store.Vendors
);
}
[TestMethod]
public void CreateOrder_CreatesNewOrderWithProperties ()
{
Store store = new();
Vendor vendor = store.CreateVendor("vendor 1", "description");
Order order = store.CreateOrder(vendor.Id, "order 1", "order description");
Assert.AreEqual("order 1", order.Title);
Assert.AreEqual("order description", order.Description);
CollectionAssert.AreEqual(
new List<Order> { order },
store.Orders
);
}
[TestMethod]
public void CreateOrder_CreatesNewOrderWithProperties_OnlyAddsToCorrectVendor ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("vendor 1", "description");
Vendor vendor2 = store.CreateVendor("vendor 2", "description");
Order order = store.CreateOrder(vendor2.Id, "order 1", "order description");
CollectionAssert.AreEqual(
new List<Order> { order },
store.Orders
);
CollectionAssert.AreEqual(
new List<Order> {},
store.GetVendor(vendor1.Id).Orders
);
CollectionAssert.AreEqual(
new List<Order> { order },
store.GetVendor(vendor2.Id).Orders
);
}
[TestMethod]
public void GetOrder_FindAndReturnsOrderById_ReturnsCorrectOrder ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("vendor 1", "Description of Vendor");
Order order1 = store.CreateOrder(vendor1.Id, "order 1", "order description");
Order order2 = store.CreateOrder(vendor1.Id, "order 2", "order description");
Order order3 = store.CreateOrder(vendor1.Id, "order 2", "order description");
Assert.AreSame(order2, store.GetOrder(order2.Id));
}
[TestMethod]
public void GetOrder_FindAndReturnsOrderById_ReturnsNullIfNotFound ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("vendor 1", "Description of Vendor");
store.CreateOrder(vendor1.Id, "order 1", "order description");
Assert.IsNull(store.GetVendor(1123));
}
[TestMethod]
public void DeleteOrder_FindsAndDeletesOrderById_RemovesOrder ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("vendor 1", "Description of Vendor");
Order order1 = store.CreateOrder(vendor1.Id, "order 1", "description here");
Order order2 = store.CreateOrder(vendor1.Id, "order 2", "other description");
CollectionAssert.AreEqual(
new List<Order> { order1, order2 },
store.Orders
);
Order deleted1 = store.DeleteOrder(order2.Id);
Assert.AreSame(order2, deleted1);
CollectionAssert.AreEqual(
new List<Order> { order1 },
store.Orders
);
Order deleted2 = store.DeleteOrder(order1.Id);
Assert.AreSame(order1, deleted2);
CollectionAssert.AreEqual(
new List<Order> {},
store.Orders
);
}
[TestMethod]
public void DeleteOrder_FindsAndDeletesOrderById_AlsoRemovesFromVendorOrders ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("vendor 1", "Description of Vendor");
Order order1 = store.CreateOrder(vendor1.Id, "order 1", "description here");
Order order2 = store.CreateOrder(vendor1.Id, "order 2", "other description");
Order deleted = store.DeleteOrder(order2.Id);
CollectionAssert.AreEqual(
new List<Order> { order1 },
store.GetVendor(vendor1.Id).Orders
);
}
[TestMethod]
public void DeleteOrder_FindsAndDeletesOrderById_DoesNothingIfNotFound ()
{
Store store = new();
Vendor vendor1 = store.CreateVendor("vendor 1", "description here");
Order order1 = store.CreateOrder(vendor1.Id, "order 1", "description here");
Assert.IsNull(store.DeleteVendor(410321));
CollectionAssert.AreEqual(
new List<Order> { order1, },
store.Orders
);
}
}
}
|
STACK_EDU
|
At Capsens, we use the scrum methodology to structure our projects.
This methodology consists in developing our platforms by series of sprints which are relatively short periods allowing to iterate on the development of a project. These generally last one or two weeks, but at Capsens we have opted for one-week sprints in order to have a more reactive organization.
Each sprint revolves around a key meeting called a "ceremony" which helps structure the developer's sprints.
The name "ceremony" is chosen on purpose because each speaker has a specific role to play and the process is precise and always identical, as you will see in this article.
In this article I will try to explain to you the operation that we have put in place to make our ceremonies a success.
First of all, it is useful to understand who is present at this meeting and what their roles are.
First, there is the Customer. It is important that he is present at these meetings for several reasons because it allows him to:
Prioritize and interact with the work team,
Understand every development that is planned. We then avoid developing functionalities that could be useless or misunderstood,
Understand the various issues encountered during the last sprint and any delays,
Know the progress of the work throughout the project.
Then there is the Project Manager. He is the one who prepares and directs this meeting. This is one of the important tasks for which he is responsible.
There is, of course, the Developer. It is he who will develop all the functionalities mentioned during the meeting. It is therefore important that the latter fully understands each of the functionalities, but it is also essential that he actively participates in these meetings in order to contribute his technical expertise. In some cases, a feature may seem simple but involves a lot of technical complexities and takes up time that could otherwise be allocated. It is therefore a question of finding solutions together to satisfy the business side and the technical side.
Finally, there is the Project Master. This is a senior developer and technical manager of the project who proofreads the work that is carried out by the developer (whatever his level). At Capsens, we are committed to carrying out the most qualitative work possible and this notably involves various validation processes. Thus any work done by a developer must be proofread by another developer.
If the Master happens to be the developer who will implement the various functionalities, then the replacement proofreader must be present because he is the one who will proofread the work carried out by the master.
Now that you see who the members of this meeting are, let me explain what tools we use to identify all the features to be developed.
At Capsens, we use Trello.
This tool allows us to easily track the progress of the work and distribute the various tasks incumbent on each person.
On this board, there are 8 important columns and all the cards pass in turn in each of these columns. Here they are in order:
: column in which the Project Manager prepares the cards for the next sprints.
: column in which the developer sees all the cards he has to develop during the sprint.
: you can see the cards under development. In this column, there is always only one card per developer, because it is impossible to deal with two topics at the same time
: column in which is all the cards that have been completed by the developer and whose code must be proofread by the Master.
: column displaying all the cards to be tested by the Project Manager. Indeed, even after technical validation by the Master, the Project Manager must ensure that the functionality corresponds to what was specified.
: column displaying all the cards which have been validated by the Project Manager and which must now be validated by the customer himself. By integrating the client into the tests, we ensure that everything has been continuously validated on the project and that there will be no surprises at the end of the project.
To be deployed
: This column lists the cards which have been validated and which must be deployed in production.
: Here are the features that have been put into production.
: In this column are all the cards that have not passed all the validations and which must be corrected by the developer. Fortunately, not all cards go through this column.
Below is an example of a Trello board used for one of our projects:
Linear is a good alternative to Trello, but the tool is a bit more developer-oriented and less user-friendly for a client.
Now that you understand the stakeholders and the tool used to track jobs, you now need to understand the points system. At Capsens, a development day is worth 13 points and any feature can be worth between 1 and 13 points:
1 point: wording change
2 points: minor development
3 points: development with a minimum of reflection / research
5 points: one morning
8 points: one afternoon
13 points: a full day
It is important that a map be produced in less than a day because:
well-cut work is better specified
it is more motivating for a developer to deal with many small maps than with few large maps
we can test and deploy independent pieces of code without being blocked by an element
on the other hand, it requires a lot of specification time from the project manager. This is the price of quality.
After each functionality explanation and when the Project Manager has ensured that the specification is clear to everyone, we proceed to the estimation of the card. When we are face-to-face, we use "physical" cards numbered between 1 and 13. The participants give their estimate in heart and this can lead to discussions or reveal misunderstandings. Since the period of Covid-19 and the democratization of telework, we mainly use the Scrum-poker site for our estimates.
As the developer himself has a role of Master on other projects and must validate code from other developers, we cannot do sprints of 5*13 but of 4*13 points. This is why, at Capsens, we work with sprints of 52 points, which is the equivalent of 4 full days of work and 1 day for the ceremony itself and the rest described above.
Below is what a Trello card that includes the technical specification of an easy-to-implement feature looks like:
This meeting represents the backbone of the project and it is essential that everyone is involved and understands all the topics discussed. It is a moment of privileged exchanges between everyone and it is the responsibility of the Project Manager to ensure that this ceremony is, on the one hand, correctly prepared and, on the other hand, that everything is perfectly established at the end of the meeting.
To avoid missing a topic, the Project Manager begins the meeting by summarizing the past week with the blockages encountered, if any, and what has been accomplished.
He then does a quick column-to-column review starting at the end (i.e. what needs to be deployed in production) to finally get to the Backlog column and start explaining the features one by one.
When a ceremony is well prepared and framed, it does not need to last much more than an hour, an hour and a half. There is nothing worse than requisitioning the time of the various people present if it is not necessary.
You now understand the operation and the role of a scrum ceremony.
This meeting is the backbone of a project and allows not only to ensure that it succeeds in the most efficient way possible, but also to bring high satisfaction to the customer who is involved from the beginning to the end of the project.
|
OPCFW_CODE
|
Hi everyone!Another year of Hyper Dragon Ball Z in the bag! As a special treat from me to you, I've decided to release this sweet thing I had packed away. We've been working on this in total secrecy for a while now. So, today for my birthday, I welcome you all to give a try to Hyper DBZ's own Vegetto. He doesn't have an Emo mode nor Finishers, but other than that I think you'll find him fun. Hope you guys have a blast!I'll include the full Youtube video description below so I'm sure you guys won't miss it:"Happy Birthday to me, everybody! Here's my present to you all, something we've been brewing in secrecy...Vegetto/Vegito for Hyper Dragon Ball Z!XGargoyle has been coding this beast of a char. He's been the biggest cog in the machine that made this character, pulling late-nighters and even doing last-second updates and fixes! For his efforts, I think he deserves a little donation from you: https://www.paypal.me/XGargoyleJustNoPoint, Iced, Ethan and others have also chipped in. And obviously, I was on spriting duty! His voice is made up of Rice Pirate's Vegeta, copied by Alpha Proto's Goku and also mixed together by him. I didn't have enough time to put together a really great video, so I made due with some screenshots, I hope that's OK with y'all!Download him here: http://network.mugenguild.com/balthazar/char_vegetto.htmlIn order for some of our earlier characters to work properly with Vegetto, you'll have to redownload them, as we updated them. These versions will also give you a glimpse of how polished our upcoming 5.0 build is going to be, as these characters were updated with that version in mind. Also worth noting is that both Goku's have their new voice by Alpha Proto.Goku : http://network.mugenguild.com/balthazar/char_goku.htmlSSJ Goku : http://network.mugenguild.com/balthazar/char_ssjgoku.htmlGohan : http://network.mugenguild.com/balthazar/char_gohan.htmlInvite link to the Hyper DBZ Discord channel: https://discord.gg/xjaVqt9Big shout out to DaMarcus008, the guy who's been spicing up our game with very cool original CPS3 themes. Including the one you've been hearing in this video. You want to help Hyper DBZ? Subscribe to DaMarcus's YouTube channel:https://www.youtube.com/user/damarcus008/Also if you want to support my sprite-making, please become a Patron of mine: https://www.patreon.com/TheBalthazarOr a one-off donation is just as appreciated! https://www.paypal.me/RonnieBalthazar"
|
OPCFW_CODE
|
What free UML authoring tools do you use and why is it better than others?
ArgoUML - I use it for its simplicity.
Although it's not exclusive to UML, I use Dia. It has the symbols used in most (if not all) of the UML diagrams, but it also supports flowcharts, network diagrams, and a few other things that I've occasionally used as a software engineer.
I tried most of the aforementioned tools so let me state my opinion on it here:
- Dia - an old veteran; builds reliable charts (not just UML) but is rather cumbersome to use (especially if your diagrams get bigger :-( ) almost no restrictions on what to connect to each other, laying out diagrams nicely needs lots of manual adjustment (a serious time killer!), the dialogue boxes are hard to use (e.g. obsolete shortcuts such as alt+O for 'OK' to close it), navigating in a diagram wrecks your nerves with an incomplete endless sheet metaphor (scrollbars only work if one of your objects is out of the viewport; not all the time [like in Inkscape]) etc. etc.;
To sum it up: robust and reliable, but aged (esp. in terms of usability); I used it a lot (and wasted lots of time -rearranging my diagrams).
- StarUML and argoUML - just used them shortly as they only support UML 1.x; someone even wrote their thesis (in German) on StarUML's shortcomings!
- Visual Paradigm - new, intelligent, but the community edition is very limited: you'll get an ugly watermark if you create more than one diagram type per project; you can, however, easily circumvent this by putting all your diagrams into one and cut it up with graphics app later,
This is my clear recommendation; you just save so much time when creating diagrams compared to Dia!
- POPP/POI (Plain Old PowerPoint/Impress) - use your favourite office's graphics app! Dumb to the bones when it comes to what's allowed, but at least the connections flow nicely and aligning objects works like a charm!
Edit: 1/7/2013: The drawing component in Google Docs supports snapping and drawing. Still no "real UML tool", but works good enough and is easily shareable.
- Online tools such as gliffy.com - mostly nice, but no good for any serious work ;-)
- yEd - I just gave it a short try, but it seems as well suited as Visual Paradigm. Give it a try and see for yourself!
- Red Koda - Was recommended on StackExchange in an article asking for UML learning resources; also interesting in a broader sense!
Nota bene: You will find shortcomings (unsupported features, wrong layout etc.) in almost any UML tool you'll use. Thus, IMHO the drawing apps supporting UML shapes or snapping are still the most useful.
There's yuml which is pretty cool as it allows you to create UML diagrams online, with no tools and so easily embeddable in blogs, wikis, emails, etc.
I use Umlet. What I like with this software is that it's a drawing tool only. It doesn't force you to create and maintain a model, and it doesn't try to generate/parse code. Unlike a lot of UML tool I tried, I've always been able to draw the diagram I had in mind (the drawing primitives are quite complete, and they are extensible by code). It works well with my other tools : the text based format is ok for working with my VCS, and the png/svg can be generated using a command line (I use it to automate the build of my doc).
Jude Community is my first choice. Even though they're Astah now, you can still find jude community 5.2 over the web. If you never used I'd give it a try. PS: I personally didn't like StartUML.
Although they share some very common features and even though Jude is not developed anymore, I prefer Jude mostly because of its usability. I've used Jude some years ago for studying and have to work with starUML + VS UML Tool today (company requirement) so I've been an avid user of both tools. I find Jude way more friendly to use. That's why I said: I personally didn't like StartUML. Features compared I don't know how Jude would go, since we use starUML for documentation purposes only and Jude is discontinued. Regards.
It is full featured, open source and regularly maintained.
I also request the readers to visit BOUML Project status - https://stackoverflow.com/questions/3721008/bouml-project-status.
Personally, I like StarUML.
Very full featured and open-source!
From the website:
StarUML is an open source project to develop fast, flexible, extensible, featureful, and freely-available UML/MDA platform running on Win32 platform. The goal of the StarUML project is to build a software modeling tool and also platform that is a compelling replacement of commercial UML tools such as Rational Rose, Together and so on.
I use yEd when the idea/app is in its infancy and migrate up to ArgoUML when it needs more bells and whistles. Liked Visio, to some degree, but not enough to buy
I've used many of the really expensive ones and hated them all. I even resorted to using drawing tools in a number of cases, but that was very limiting and leaves you without many of the benefits of having a UML tool to begin with. Anyways, at my latest company they were using one I never tried, Visual Paradigm. I have to say that it is by far the best I've used. It still has its issues but it is about the only one that I actually like to use. Visual Paradigm does pretty much everything the really expensive tools do but at a miniscule fraction of the cost.
They have a free community edition, that I know is available for non-commercial use. I'm not sure if they limited functionality in any other way. If I recall correctly, you can buy a commercial edition for around $100 bucks. So if you need it for commercial purposes and your employer won't spring for that then I'd really be looking for a new job really quickly.
The only thing we haven't looked into yet is multi-developer support, which all the expensive tools are able to do. But from the web-site it seems like they support it.
I use a licensed version of Visual Paradigm at work. There is a free community edition that should be able to handle most of your basic UML needs.
|
OPCFW_CODE
|
Understanding Mixed and Native Domain Functional Levels
Understanding the three domain functional levels of Windows Active Directory is key to taking advantage of its advanced features
- By Derek Melber
Once you have Windows Active Directory, you must choose a domain functional level. The functional level controls which features of Active Directory are available. If you've been running in the functional level that limits Active Directory features, you might not know what you're missing. On the other hand, you might be wondering why other domains you see have more features and capabilities than yours.
First, let's define the functional levels:
- Mixed domain: A Windows 2000 or 2003 domain that has the ability to have Windows NT 4.0 domain controllers.
Interim domain: Same as a Windows 2000 mixed domain but no Windows 2000 domain controllers can be added to the domain.
Native domain: A Windows 2000 domain native does not have the ability to run Windows NT 4.0 domain controllers. A Windows Server 2003 native domain cannot include Windows NT 4.0 or Windows 2000 domain controllers.
These definitions most likely differ from those you have seen before. That's because they don't mention client computers. In fact, the domain functional level has nothing to do with the client or server operating system version. This means that all three domain functional levels can have Windows 95, 98, and NT Workstation clients. In addition, Windows NT 4.0 Servers can exist in any of the domain functional levels. This means that the domain functional level is solely dependent upon the domain controller operating system version.
Default Domain Functional Levels
When you install Active Directory (2000 or 2003) for the first time, the default domain functional level is mixed . This provides the most flexibility and backwards compatibility. If you are in this situation, you only have one option for moving the domain functional level: native. (The only way to configure an interim functional level is to upgrade from Windows NT 4.0 to Windows Server 2003, during which you'll be asked to pick the functional level you want.)
The move to native functional level is a simple one in terms of configuring the domain to move to this level. Before proceeding, verify which functional level you are working in:
Open Active Directory Users and Computers.
Right-click on the domain node and select Properties.
The result is a dialog box that shows you the domain functional level, as shown in Figure 1.
Figure 1. Domain functional level is displayed in the domain
properties from Active Directory Users and Computers
Moving to a higher domain functional level is just a couple of clicks away. To move the level up:
Open Active Directory Users and Computers.
Right-click on the domain node and select Raise Domain Functional Level.
The result is a dialog box that shows you the options that you can select for the new domain functional level, as shown in Figure 2.
Figure 2. You can raise the domain functional level from mixed
to Windows 2000 native or Windows Server 2003 (native)
Mixing Domain Functional Levels in the Forest
The domain functional level is unique for each domain. This means that a single Active Directory forest can have a mixture of domain functional levels. For example, if the first domain that creates the forest is a mixed functional level domain, the child domains of this domain can be at either a mixed or native functional level.
This is possible because the domains don’t share anything that would break the forest if there was a mixed and native functional level domain working together.
Active Directory Database Size Limitations
One of the main reasons that you might want to move to one of the native functional levels is that the Active Directory database is not limited to the old Windows NT 4.0 size limitations. Windows NT 4.0's initial database size limitation was 40MB. This could have been expanded, but the upper limit was about 200MB. With a large domain, you can use up 40MB very quickly with user, group, and computer accounts. Group Policy objects, organizational units, and other Active Directory objects also play a factor in the size of the database. Therefore, you may need to move to a native-level domain to accommodate all objects.
Once you have a native-level domain, the Active Directory database can expand to over one million objects, which can be expanded further with help from Microsoft.
At mixed- or interim-domain levels, the groups that you have to work with are limited. Since the Windows NT 4.0 domain controllers are included in the domain, the groups must adhere to the rules that Windows NT 4.0 domain controllers know, including:
Universal groups: These groups can only be configured as Distribution groups, not as security groups. This means that you can’t add universal groups to an access control list (ACL) of a resource.
Domain Local groups: These groups are only visible to the domain controllers.
Also, groups must follow group nesting rules from Windows NT 4.0: global groups can only have users as members, while domain local groups can contain users and global groups.
As soon as you move to a native functional level, these rules change. This means that universal groups can be security groups, and that domain local groups can be seen by all computers that have joined the domain. Finally, groups can have “like group nesting.” For example, domain local groups can contain other domain local groups.
Remote Access Permissions
When a domain is still in a mixed or interim functional level, users ability to connect to a remote access server is controlled by the user properties in Active Directory. You can see this from the interface which allows you to configure Allow or Deny permissions, as shown in Figure 3.
Figure 3. A mixed or interim level domain only provides
remote access permissions for Allow and Deny
When the domain is moved to native functional level, remote access is shifted from the user properties in Active Directory to the Remote Access Policies stored in the Remote Access Service configuration. The Remote Access Policies control access through conditions and a profile, as you can see in Figure 4.
Figure 4. Remote Access Policies control access in a
native functional level domain
When you install your Active Directory domain, you already have a domain functional level selected for you. From there, you can either keep the domain at that level, or increase the level to gain additional features. If you want to increase the size of the Active Directory database to support all of the users, groups, computers, group policy objects, etc, you will need to move to native functional level. Also, control of groups and remote access permissions differs greatly from the different domain levels.
Before you make your move, be sure to research and test what it will do to your environment, since the move is one directional. There's no easy "undo" feature.
Derek Melber (MCSE, MVP, CISM) is president of BrainCore.Net AZ, Inc., as well as an independent consultant and speaker, as well as author of many IT books. Derek educates and evangelizes Microsoft technology, focusing on Active Directory, Group Policy, security and desktop management. As one of only 8 MVPs in the world on Group Policy, Derek’s company is often called upon to develop end-to-end solutions regarding Group Policy for companies. Derek is the author of the The Group Policy Resource Kit by MSPress, which is the defacto book on the subject.
|
OPCFW_CODE
|
Pair programming driven by programming language generation
We’re excited to bring back Transform 2022 in person on July 19 and virtually from July 20-28. Join leaders in AI and data for in-depth discussions and exciting networking opportunities. Register today!
As artificial intelligence expands its horizons and innovates, it increasingly challenges people’s imaginations of opening up new frontiers. As new algorithms or models help solve an increasing number and types of business problems, advances in natural language processing (NLP) and language models have programmers thinking about how to revolutionize the world of programming. .
With the evolution of several programming languages, the work of a programmer has become more and more complex. While a good programmer may be able to define a good algorithm, converting it into a relevant programming language requires knowledge of its syntax and available libraries, which limits a programmer’s ability in various languages.
Programmers have traditionally relied on their knowledge, experience, and repositories to create these code components in all languages. IntelliSense helped them with proper syntax prompts. Advanced IntelliSense has gone one step further with syntax-based instruction completion. Google code search (code) / GitHub code search even listed similar code snippets, but the burden of tracing the right pieces of code or scripting the code from scratch, composing them together, and then contextualizing them according to a specific need rests solely on the shoulders of the programmers. .
We are now seeing the evolution of intelligent systems that can understand the purpose of an atomic task, understand the context, and generate the appropriate code in the required language. This generation of contextual and relevant code can only happen when there is a good understanding of programming languages and natural language. Algorithms can now understand these nuances across languages, opening up a range of possibilities:
- Code conversion: understand the code of a language and generate an equivalent code in another language.
- Code Documentation: generate the textual representation of a given piece of code.
- Code generation: Generate proper code based on textual input.
- Validation of codes: validate the alignment of the code on the given specification.
The evolution of code conversion is best understood when looking at Google Translate, which we use quite frequently for natural language translations. Google Translate learned the nuances of translation from a large corpus of parallel datasets – the source language statements and their equivalent target language statements – unlike traditional systems, which relied on translation rules between source and target languages.
Since it is easier to collect data than to write rules, Google Translate has adapted to translate between more than 100 natural languages. Neural Machine Translation (NMT), a type of machine learning model, has enabled Google Translate learn from a huge dataset of translation pairs. The efficiency of Google Translate inspired the first generation of machine learning-based programming language translators to adopt NMT. But the success of NMT-based programming language translators has been limited due to the unavailability of large-scale parallel datasets (supervised learning) in programming languages.
This has given rise to unsupervised machine translation models that leverage a large-scale monolingual codebase available in the public domain. These models learn monolingual code of the source programming language, then monolingual code of the target programming language, and then are equipped to translate the code from the source to the target. Facebook’s TransCoder, built on this approach, is an unsupervised machine translation model that was trained on multiple monolingual codebases from open source GitHub projects and can efficiently translate functions between C++, Java, and Python.
Code generation is currently evolving under different avatars – as a simple code generator or as a paired programmer automatically completing a developer’s code.
The key technique used in NLP models is transfer learning, which involves pre-training the models on large volumes of data and then fine-tuning them based on targeted constrained data sets. These are largely based on recurrent neural networks. Recently, models based on the Transformer architecture have proven to be more efficient because they lend themselves to parallelization, speeding up computation. The refined models for programming language generation can then be deployed for various coding tasks, including code generation and unit test script generation for code validation.
We can also reverse this approach by applying the same algorithms to understand the code to generate relevant documentation. Traditional documentation systems focus on translating legacy code into English, line by line, giving us pseudo-code. But this new approach can help summarize code modules into comprehensive code documentation.
The programming language generation models available today are CodeBERT, CuBERT, GraphCodeBERT, CodeT5, PLBART, CodeGPT, CodeParrot, GPT-Neo, GPT-J, GPT-NeoX, Codex, etc.
DeepMind’s AlphaCode goes one step further by generating multiple code samples for the given descriptions while ensuring the validation of the given test conditions.
Code completion follows the same approach as Gmail Smart Compose. So much experienced, Smart Compose prompts the user with real-time contextual suggestions, helping to compose emails faster. This is basically powered by a neural language model that was trained on a large volume of emails from the Gmail domain.
Extending the same to the domain of programming, a model that can predict the next set of lines in a program based on the last lines of code is an ideal pair programmer. This dramatically speeds up the development lifecycle, improves developer productivity, and ensures better code quality.
CoPilot can not only auto-complete blocks of code, but can also modify or insert content into existing code, making it a very powerful pair programmer with refactoring capabilities. CoPilot is powered by Codex, which has trained billions of parameters with bulk code from public repositories, including Github.
A key point to note is that we are probably in a transitional phase with pair programming essentially operating in the human-in-the-loop approach, which in itself is an important step. But the final destination is undoubtedly autonomous code generation. The evolution of AI models that evoke trust and responsibility will define this journey, however.
Generating code for complex scenarios that require more problem solving and logical reasoning remains a challenge, as it can justify generating code never encountered before.
Understanding the current context to generate the appropriate code is limited by the size of the model’s context window. The current set of programming language models supports a context size of 2048 tokens; The Codex supports 4,096 tokens. Samples in few-hit training models consume a portion of these tokens and only the remaining tokens are available for developer input and model-generated output, while zero-hit/fine-tune training models reserve the entire context window for entry and exit. .
Most language models are computationally intensive because they are built on billions of parameters. Their adoption in different business contexts could place an increased demand on compute budgets. Currently, there is a lot of focus on optimizing these models to enable easier adoption.
For these code generation models to work in paired programming mode, the inference time of these models must be shorter so that their predictions are returned to the developers in their IDE in less than 0.1 seconds to make one seamless experience.
Kamalkumar Rathinasamy leads the machine learning-based machine programming group at Infosys, focusing on building machine learning models to augment coding tasks.
Vamsi Krishna Oruganti is an automation enthusiast and leads the deployment of AI and automation solutions for financial services customers at Infosys.
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including data technicians, can share data insights and innovations.
If you want to learn more about cutting-edge insights and up-to-date information, best practices, and the future of data and data technology, join us at DataDecisionMakers.
You might even consider writing your own article!
Learn more about DataDecisionMakers
|
OPCFW_CODE
|
Bing Chat uses a combination of ChatGPT (an AI language model developed by Open AI) and the Bing search engine results to provide a user with a summary of the best information on the SERP. Through a series of algorithmic processes, Bing Chat can serve users the information they need quickly. This removes the need to bounce around the search results to find what they are looking for. Bing Chat is conversational, and users can ask additional questions in Bing Chat until they find the answer they need or the website that best serves their query.
Microsoft (the owner of Bing) describes Bing Chat as “your AI-powered co-pilot for the web”. When Microsoft launched Bing Chat, they described it as an “Answer Engine”.
In this article, you will get an overview of the process Bing uses to generate answers in their ChatGPT SERP Feature (Bing Chat), and also two helpful analogies that will help you understand how to approach optimising your content for Chat Features in Google, Bing and other Search Engines.
Bing Chat and Knowledge Panels save time for users
Yusuf Mehdi, Corporate Vice President, Modern Life, Search, and Devices at Microsoft states that most website visits result in bounces. Bounce rates are significantly higher on informational queries as users bounce back and forth to gather snippets of information from different parts of the search results to get the answer they need.
Yusuf Mehdi talks about the suggestions on the left-hand side of the SERP and the answers on the right-hand side SERP. At Kalicube, we refer to the Left and Right Rails of the SERP when working with clients.
According to Mehdi, the answers on the right-hand side (including Bing Chat and Knowledge Panels – if they are triggered by the query) are there to save the user time.
Knowledge Panels and the Bing Chat feature aim to improve the user experience by minimising the bouncing between search results to get the information. They provide a summary of the information in the pages behind the SERP in order to save the user the time and effort of visiting multiple pages and collating the information manually.
Once again, this fits with the stated aim of both Bing and Google: to get the user to the correct answer to their question (or the best solution to their problem) as efficiently as possible.
But not all queries in the Bing search bar result in a ChatGPT SERP Feature in the Bing search results.
What Search Queries Trigger the ChatGPT SERP Feature on Bing?
The ChatGPT Box doesn’t appear for every search on Bing. It will appear on the right-hand side of Bing’s search results on the Desktop when Bing’s Whole Page Algorithm considers that the intent of the search query is either informational, research intent or appropriate for a potential question and answer session.
There are multiple types of queries that fulfil these criteria.
- Explicit questions such as “What is Kalicube” invite questions and answers.
- Implicit questions such as “Kalicube Services” and even Entity Names like “Kalicube” can also trigger a ChatGPT box.
What is the Process for Bing Chat ChatGPT to Create the Answers it Provides?
Bing Chat works with any query – explicit questions you ask, and any other input like statements and one-word queries. When the input is not an explicit question, Bing will treat it as an implicit question and answer what it considers the most probable intent.
Here is how Fabrice Canel, Principal Program Manager at Bing presents it:
Bing takes the input, and Bing Orchestrator uses Natural Language Processing (NLP) to understand the intent, then conducts a normal Bing search behind the scenes.
When the answer to the question is unclear, ambiguous or complex, it will make two, three and sometimes as many as four searches like this:
Behind the scenes (at the bottom of the Prometheus Model image above), the algorithms conduct an analysis of the content of the pages behind the search results using Next-Generation GPT. They are leveraging their snippet algorithms to bring together advanced NLP, data from their Large Language Models and factual information from their Knowledge Graphs.
The answer is a rewritten summary of the best answers it has found in those search results. The words are close variants of, or the exact text provided by the websites and the whole answer will generally contain chunks from multiple sources.
If you are into Featured Snippets, then you can see this as a “Frankenstein’s Monster Featured Snippet”, as we’ll see.
If you are more interested in Knowledge Panels, like me, you can see this as a dynamic Knowledge Panel.
Pro Tip: The Advanced GPT can return original text without citing sources.
Looking at Bing Chat as Featured Snippets or Knowledge Panels
The Micro Featured Snippets Approach
Since it is stitching together several sentences from multiple web pages, the functionality on Bing is like a series of mini Featured Snippets pieced together. This isn’t a huge leap since Bing and Google already take different parts of a webpage and stitch them together to create a Featured Snippet (the Q&A in Bing-speak).
Strategies you use for Featured Snippets will work here, but you need to think more micro. Rather than headings, paragraphs, tables and lists, you need to think in terms of headings, individual sentences, table rows and individual items in lists.
But there’s more. You can take this to the next level.
Three years ago, Ali Alvi (head of Q&A / Featured Snippets at Bing) told me that his algorithms did NOT use the Knowledge Graph. At the time, the algorithms made the best guess of the implicit question that the passage of content answers without context beyond the page itself.
The approach remains the same, however Fabrice Canel (Prinicpal Program Manager at Bing) confirmed to me that, thanks to ChatGPT and the huge advances in Microsoft’s own Machine Learning, Bing’s new Chat Answer Engine does look at context beyond each webpage and does double check facts using thier Knowledge Graph.
Bing has improved its algorithms and because Bing Chat uses ChatGPT, we can leverage the way we present our information so it satisfies the Q&A approach to Micro Featured Snippets in Bing Chat.
The Dynamic Knowledge Panel Approach
Thanks to Fabrice Canel, Principal Program Manager at Bing, we know that Bing is referencing its Knowledge Graph. In the results, we can clearly see this since Bing Chat cites its sources, but sometimes information comes from their Knowledge Graph because some of the answer is not present in the web pages it is citing.
Incorrect information has caused embarrassment but it’s important to remember the Knowledge Graph is not exhaustive – probably only a few hundred thousand billions of facts in a world that contains facts that can be counted in multiples of a googolplex. And googolplex, that is a “thing” – read about Googleplex here.
Back to Knowledge Panels. A Knowledge Panel on Bing or Google serves a specific purpose. For a search query about an Entity, the Knowledge Panel exists to provide a convenient structured summary of the factual information from the search results with the aim of saving the user the time and energy of clicking on multiple links.
That sounds very much like the Answer Engine Chat feature on Microsoft Bing:
It is there to provide a convenient summary of the factual information from the search results with the aim of saving the user the time and energy of clicking on multiple links to gather the information themselves.
The Knowledge Panel also has a fact-checking system based on the Knowledge Graph to (generally) safeguard factual accuracy. So does Bing Chat. And so will Bard on Google.
Pro Tip: Managing your presence in Knowledge Graphs of Microsoft, Google, Apple, Amazon and other big tech companies is crucial.
Features That Don’t Fit Into Featured Snippets or Knowledge Panels
Bing Chat can display the references it uses, allowing the user to research individual elements in the answer further in the specific content it was taken from.
And, of course, it allows for an interactive exchange of questions and answers where the previous exchanges create a context that affects future exchanges.
The suggested questions are driven by the Engine’s understanding of the entity or topic. In this case, Bing Chat associates “Kalicube” with Brand SERPs and Knowledge Panels. Importantly, with the right strategy (see below), brands can feed the algorithms with questions. For example, in this example Bing Chat suggests “What is Brand SERP?” and “What is Knowledge Panel Management?”, both of which are terms we coined and questions only we answer. Both questions are valuable to us as a company because the bring the user down the funnel.
Google’s generative AI in Search Works on the same approach
Read more about Google’s Generative AI in Search here >>
Kalicube can Entrench You or Your brand in ChatGPT, Bard and Google’s Generative AI in Search
All of this is hugely good news for Kalicube. The Kalicube Process and Kalicube Pro are designed to “educate” the Knowledge Algorithms of Google, Bing, Apple, Amazon and other companies using Knowledge Algorithms.
Using the Kalicube Process, any company or person can have a Knowledge Panel which means that they will be listed in the Knowledge Graphs used by Google and Bing, which is a huge advantage for “ranking” in their AI-driven chatbots.
We help Google and Bing understand the facts about your cornerstone Entity: the company, major products, and people. Once they understand and are confident about the facts surrounding your Entity, feeding the algorithms with additional related information is significantly easier.
This means your brand earns a place in the Knowledge Graph and is a trusted fact the engines can serve confidently – both in a Knowledge Panel and in the Bard and Bing Chat results. Our approach is a future-proof approach to your digital presence across search.
For the last 10 years at Kalicube, we have perfected the strategies you can use to educate Knowledge Algorithms about the facts about entities.
We do this through–
- Clear explanations that are easy for the Knowledge Algorithms to consume, process and digest;
- Proactively correcting corroborative information to ensure consistency of information across the web;
- Carefully building sources the algorithms’ trust BUT are owned and controlled by our clients (publishers and authors – think E-E-A-T on steroids).
We support this approach with Schema Markup. Schema Markup is not a standalone strategy as many people believe it to be. We design our Schema.org Markup specifically for educating these algorithms.
|
OPCFW_CODE
|
It looks like you are asking what order rows will show up on retrieval when two rows with a timestamp cluster key are written to a table. If that is the case then the answer is the rows being returned will be in timestamp order. On the other hand, if you are asking which of two rows with identical keys are going to end up in the table?
Most of the time the timestamp used is supplied by the Cassandra coordinator node receiving the request but the application can supply its own. And how expensive?
- Consejería bíblica (Spanish Edition)!
- Delirium (Amor-Trilogie) (German Edition);
- The Stories of Haven: Reaperville.
Cassandra is designed from the top down to avoid doing updates. There are no atomic or lock operations in Cassandra. If you were to try to do a simple update in Cassandra without counters or lightweight transactions you would read a column value and then write back the updated version. Because there are no locks or atomic operations there would be no guarantee it would work correctly.
- Cassandra Design Patterns.
- Recent Popular Posts?
- A Really Bad Hair Day: The Return of Magic Plague!
- Robert Crumb - The Pocket Essential Guide!
- Architects of Poverty: Why African Capitalism Needs Changing!
But it would be fairly fast. With PAXOS which is used in both Lightweight transactions which are not transactions at all Cassandra has to do several round trips to all available nodes containing replicates to complete the operation. Its best to think of an LWT or Counter operation being about 6 times as expensive as the basic update I mentioned above. It is okay to use counters or LWT operations when they are a relatively small part of your workload. It is also important to read about how both work in detail before using them to avoid getting surprised.
This link does a decent job of explaining how lightweight transactions work in Cassandra.
Counters are essentially a special case of LWT. Thanks for writing it! Say you want to store a lot of events, but you also want to filter by special type, and date, and the search by multiple columns, and also order by multiple columns.. Your email address will not be published. Other brands, product and company names on this website may be trademarks or registered trademarks of Pythian or of third parties. Use of trademarks without permission is strictly prohibited.
Official Pythian Blog. Data Warehouse Migrations. Advanced Analytics Services. Analytics Strategy and Planning Services. Operational Data Services Database Consulting. Database Managed Services. Database Migrations. Database Troubleshooting and Performance Tuning. Remote DBA Services. Cloud Services Cloud Managed Services.
Export all data from cassandra
Cloud Strategy Consulting. Cloud Migration Services. DevOps Consulting Services. Managed Kubernetes. Technologies Analytics Hadoop. AWS Redshift. Google BigQuery. Oracle EBS. Partners Amazon Web Services. Google Cloud Plarform. Microsoft Azure. Resources White Papers. Data Sheets. About About Pythian Leadership Team. Board of Directors. Social Responsibility.
Tags: Cassandra , Data model , Distributed datbase , Partition key.
Cassandra Use Cases: When To Use and When Not To Use Cassandra
Introduction I have a database server that has these features: High available by design. Can be globally distributed. Allows applications to write to any node anywhere, anytime. Linearly scalable by simply adding more nodes to the cluster. Automatic workload and data balancing. A query language that looks a lot like SQL.
Managed Dependencies (55)
I like, and often promote Cassandra to my customers—for the right use cases. Where Cassandra users go wrong Cassandra projects tend to fail as a result of one or more of these reasons: The wrong Cassandra features were used. The use case was totally wrong for Cassandra. The data modeling was not done properly.
Apache Cassandra and ALLOW FILTERING
Features leading one to believe you can do some of the things everyone expects a relational database to do: Secondary indexes: They have their uses but not as an alternative access path into a table. Counters: They work most of the time, but they are very expensive and should not be used very often. Light weight transactions: They are not transactions nor are they light weight. Batches: Sending a bunch of operations to the server at one time is usually good, saves network time, right? Well in the case of Cassandra not so much.
Materialized views: I got taken in on this one.
|
OPCFW_CODE
|
package com.jasonfitch.test;
import com.caucho.hessian.io.Hessian2Input;
import com.caucho.hessian.io.Hessian2Output;
import com.caucho.hessian.io.HessianInput;
import com.caucho.hessian.io.HessianOutput;
import com.caucho.hessian.io.SerializerFactory;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;
public class SerializingUtils {
public static void testHessian(Serializable serializable, SerializerFactory serializerFactory) throws IOException {
System.out.println("############com.jasonfitch.test.SerializingUtils.testHessian################");
ByteArrayOutputStream bout = new ByteArrayOutputStream();
HessianOutput hout = new HessianOutput(bout);
hout.setSerializerFactory(serializerFactory);
hout.writeObject(serializable);
hout.flush();
byte[] body = bout.toByteArray();
ByteArrayInputStream input = new ByteArrayInputStream(body, 0, body.length);
HessianInput hin = new HessianInput(input);
hin.setSerializerFactory(serializerFactory);
Serializable copy = (Serializable) hin.readObject();
compareResult(serializable, copy);
}
public static void testHessian2(Serializable serializable, SerializerFactory serializerFactory) throws IOException {
System.out.println("############com.jasonfitch.test.SerializingUtils.testHessian2################");
ByteArrayOutputStream bout = new ByteArrayOutputStream();
Hessian2Output h2out = new Hessian2Output(bout);
h2out.setSerializerFactory(serializerFactory);
h2out.writeObject(serializable);
h2out.flush();
byte[] body = bout.toByteArray();
ByteArrayInputStream input = new ByteArrayInputStream(body, 0, body.length);
Hessian2Input h2in = new Hessian2Input(input);
h2in.setSerializerFactory(serializerFactory);
Serializable copy = (Serializable) h2in.readObject();
compareResult(serializable, copy);
}
public static void compareResult(Serializable serializable, Serializable copy) {
System.out.println();
if (serializable == null || copy == null) {
System.out.println(serializable);
System.out.println(copy);
System.out.println("serializable == copy:" + (serializable == copy));
} else {
System.out.println(serializable.toString());
System.out.println(copy.toString());
System.out.println("serializable.toString().equals(copy.toString()):" + serializable.toString().equals(copy.toString()));
}
}
}
|
STACK_EDU
|
How do I attach a security group to my Elastic Load Balancer?
Last updated: 2022-08-03
How do I configure and attach a security group to my Elastic Load Balancing (ELB) load balancer?
Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.
If you're using an Application Load Balancer, follow the instructions at Security groups for your Application Load Balancer.
If you're using a Network Load Balancer, update the security groups for your target instances because Network Load Balancers don't have associated security groups.
If your target type is an IP and the target group protocol is TCP/TLS/UDP/TCP_UDP - TCP/TLS, then the protocol defaults to load balancer private IP as the source IP. This means that it's a best practice to allowlist load balancer private IPs on your target security group. UDP/TCP_UDP defaults to preserve client IP addresses. This means that it's a best practice to allowlist client IPs in the target security group.
Note: If preserve client IP isn't activated and the target security groups are allow-listing load balancer private IPs, then you are allowing all incoming traffic to access your service. If your service is designed to be access-restricted to specific CIDR ranges, then use network access control list (network ACL) to allow-list specific CIDRs and deny the rest. Or, you can activate client IP preservation and set up restrictions on the target security group as discussed later.
- If your target type is an instance and the target group protocol is TCP/ TLS/ UDP/TCP_UDP, then the default behavior of the Network Load Balancer is to preserve the client IP address. If the client IP preservation setting remains at the default value, then it's a best practice to allowlist client IP addresses on your target security group.
As needed, you can change the default client IP preservation behavior for TCP/TLS target groups by setting a target group attribute "preserve_client_ip.enabled." It's not possible to change this behavior for UDP/TCP_UDP protocol target groups. Depending on whether client IP preservation is active or not active (based on your configuration choices), it's a best practice to adjust the IP CIDRs allow-listed on the target security groups. If the client IP preservation is activated, then it's a best practice to allowlist client IP addresses. If it's not active, then it's a best practice to allowlist load balancer private IP addresses. For more information on client IP preservation behavior of the Network Load Balancer, see Target groups for your Network Load Balancers.
Note: Make sure that you associate at least one security group with each Classic or Application Load Balancer, and that the security group allows connections between the load balancer and associated backend instances.
|
OPCFW_CODE
|
How do I show the menu bar on Android?
How do I show the menu bar on Android?
This is going to be in res/menu/main_menu .
- Right click the res folder and choose New > Android Resource File.
- Type main_menu for the File name.
- Choose Menu for the Resource type.
What is Android option menu?
Android Option Menus are the primary menus of android. They can be used for settings, search, delete item etc. When and how this item should appear as an action item in the app bar is decided by the Show Action attribute.
What are menu options?
The options menu is where you should include actions and other options that are relevant to the current activity context, such as “Search,” “Compose email,” and “Settings.”
What are the different parts of menu bar?
Menu Bar Components
|File||Commands for disk operations, including creating, loading, and saving Katana projects.|
|Edit||Undo, redo, and preferences.|
|Render||Rendering the output.|
|Util||A group of miscellaneous menu items including farm management and cache handling.|
What is a menu explain with example?
A menu is a set of options presented to the user of a computer application to help the user find information or execute a program function. Menus are common in graphical user interfaces ( GUI s) such as Windows or the Mac OS . Menus are also employed in some speech recognition programs.
How to create options menu in Android apps?
You can also add menu items using add() and retrieve items with findItem() to revise their properties with MenuItem APIs. If you’ve developed your application for Android 2.3.x and lower, the system calls onCreateOptionsMenu() to create the options menu when the user opens the menu for the first time.
How do I create a floating context menu in Android?
To provide a floating context menu: Register the View to which the context menu should be associated by calling registerForContextMenu() and pass it the View. Implement the onCreateContextMenu() method in your Activity or Fragment. Implement onContextItemSelected().
What is onprepareoptionsmenu () in Android?
On Android 2.3.x and lower, the system calls onPrepareOptionsMenu () each time the user opens the options menu (presses the Menu button). On Android 3.0 and higher, the options menu is considered to always be open when menu items are presented in the app bar.
How to add menu items to the toolbar bar using Android fragments?
Fragments can also contribute entries to the toolbar bar. To do this, call setHasOptionsMenu (true) in the onCreate () method of the fragment. The Android framework calls in this case the onCreateOptionsMenu () method in the fragment class. Here the fragment can adds menu items to the toolbar.
How do you display an options menu in Android explain?
In android, to define options menu, we need to create a new folder menu inside of our project resource directory (res/menu/) and add a new XML (options_menu. xml) file to build the menu. Now open newly created xml (options_menu. xml) file and write the code like as shown below.
Where can I find Android Easter eggs?
To access the Android 12 Easter egg, go to Settings > About Phone. Tap on Android version repeatedly until a clock appears.
Where is the menu on Google Play on Android?
The Play Store menu icon (at the upper-left corner of the screen) accesses the Play Store Options menu. The options vary depending on the screen in which it was accessed. Options can include: Store home, My apps, Shop apps, Shop music, My wishlist, People, Redeem, Settings, and Help.
Does Android 9 have a hidden game?
The famous Flappy Bird (technically Flappy Droid) game is still around in Android 9.0 Pie. First introduced in 5.0 Lollipop, the game was originally the version number easter egg for the new Android update. But after Android Marshmallow, Google began to hide it from its usual location, and Pie continues this tradition.
What are different types of menus in Android?
There are three types of menus in Android: Popup, Contextual and Options. Each one has a specific use case and code that goes along with it. To learn how to use them, read on. As shown in the code snippet above, each menu item has various attributes associated with it.
What are different types of menu?
There are 5 fundamental types of menus that are used in restaurants, and they are the most commonly used. These are a la carte, static, du jour, cycle, and fixed menus.
What is a menu list out the types of menus?
How do I add a menu to an Android activity?
For all menu types, Android provides a standard XML format to define menu items. Instead of building a menu in your activity’s code, you should define a menu and all its items in an XML menu resource. You can then inflate the menu resource (load it as a Menu object) in your activity or fragment.
How many items are in an icon menu on Android?
When opened, the first visible portion is the icon menu, which holds up to six menu items. If your menu includes more than six items, Android places the sixth item and the rest into the overflow menu, which the user can open by selecting More.
What is an Android menus?
Menus are a common user interface component in many types of applications. To provide a familiar and consistent user experience, you should use the Menu APIs to present user actions and other options in your activities. Beginning with Android 3.0 (API level 11), Android-powered devices are no longer required to provide a dedicated Menu button.
Where can I find the options menu on my Android application?
If you’ve developed your application for Android 2.3.x (API level 10) or lower, the contents of your options menu appear at the bottom of the screen when the user presses the Menu button, as shown in figure 1.
|
OPCFW_CODE
|
accessing properties of a class and printing them
I have a header file like this:
@class NSMutableDictionary, NSString;
@interface randomclassname : NSObject
{
unsigned long long _HTTPMethod;
NSString *_path;
unsigned long long _apiVersion;
NSMutableDictionary *_parameters;
NSMutableDictionary *_defaultParameters;
NSMutableDictionary *_headers;
_Bool _isSigned;
}
/// methods are down here
+ (id)builderWithHTTPMethod:(unsigned long long)arg1 path:(id)arg2;
I want to access and print NSMutableDictionary *_defaultParameters;
and
unsigned long long _apiVersion;
The follow properties inside my method object.
+ (id)builderWithHTTPMethod:(unsigned long long)arg1 format:(id)arg2
{
**access those properties here and print them on NSlog.**
return %orig;
}
Please feel free to correct me if I'm wrong. I'm not entirely sure if stuff inside @interface are called properties; I'm guessing. But that's what I'm trying to access.
declare _defaultParameters as @property
What would the syntax look like
First you need to add your _defaultParametersand your apiVersion as property in your .h file
@property NSMutableDictionary *defaultParameters;
@property (assign) unsigned long long _apiVersion;
after that you can cast in your method the param id as your randomclassname
+ (id)builderWithHTTPMethod:(unsigned long long)arg1 format:(id)arg2
{
//**access those properties here and print them on NSlog.**
randomclassname *obj = (randomclassname*)arg2;
NSLog(@"%@",obj.defaultParameters);
return %orig;
}
There are no others ways than setting the @property into my .h file? This is class-dumped h file, not sure if I can alter it.
can you show an example or even a psuedo code for it.
your builderWithHTTPMethod is inside of your class? @Dilli
I can share the header file with you. I renamed it to builderWithHTTPMethod https://pastebin.com/Ck1L8MSU builderWithHTTPmethod is called IGAPIRequestBuilder pastebin has the header file. @reinier-melian
@Dilli sorry but I can understand why you can't add those private variables as properties? can you explain a little more about of what are you need to do?
I'm trying to learn reverse engineering. Those headers files are class-dumped, not sure how I would go on about using the updated headers with instagram again. If you can show me an example of class extension with the pastebin I provided; that would help.
You are saying that you are decompiling and changing and after that compiling again? @Dilli
Let us continue this discussion in chat.
|
STACK_EXCHANGE
|
After a deployment of SAS High Performance Analytics (HPA) Server with In-Database Technologies and LASR on top of a large Hadoop cluster, your customer might want to capture IoT’s real time events and store them in Hadoop, in order to provide live reports and real time analytics, but also for further offline data cleaning process and HP Statistics.
In addition of its own powerful processing capabilities for real time events, SAS Event Stream Processing is a perfect fit for such use case.
This blog covers Architecture key considerations on such "use cases" and also implementation guidelines.
Note: SAS HPA, LASR and VA/VS are not mandatory for the ESP-Hadoop integration. However, combining these products is a perfect mix for IoT/Big Data contexts. Each component can work in synergy with the others for the greatest benefit of the customer.
The real time events can be processed at high rates in ESP and are filtered out, scanned by ESP sophisticated rules to detect any required pattern.
We can have various type of subscribers: Event Streamviewer application for real time dashboards, Message buses to propagate key information to downstream applications, LASR cluster for near-real time reports, HDFS for further computation in Hadoop, etc…
Once the "events of interest" have been captured by ESP, they can be stored (with the associated information) either in Hadoop for further analysis, data management operations or in LASR for near real time analytics or visualization.
BABAR is our "SAS HPA 9.4 Server on Hadoop" example collection. We will use this collection of machines to describe how the ESP components could be deployed in the existing HPA on Hadoop Architecture. The ESP components are represented in orange.
A simplistic ESP model, called “writeHDFS” has been created in the Babar collection to demonstrate how ESP windows can be subscribed from the Hadoop namenode in order to store events in HDFS. The model collects trading events and filters transactions above a certain amount. The “big” transactions are stored in HDFS for further analysis.
The first thing is to start the ESP server with our model, for example:
There is nothing "Hadoop specific" in the model, it is simply collecting events and performs a real time computation and filter for the events streaming in. Nevertheless, we will have an adaptor running on a remote machine subscribing to this ESP window and storing the collected outputs in HDFS files. In our example the ESP HDFS Adapter will run on the Hadoop namenode (But it could run on the ESP server or on any Hadoop “EDGE” node having write access to the HDFS layer of our cluster).
First it is necessary, on the machine running the HDFS adapter, to set the DFESP_HDFS_JARS environment variable with the required Hadoop jar files:
Actually this is the trickiest part of the setup: you will need to adjust the name of the required jars files to your Hadoop distribution and release. The Hadoop jars configuration above was tested in our Babar test collection (Hortonworks HDP 2.2), depending on your own Hadoop environment, you will have to find the appropriate location of those jars and directory. Once the environment has been set for the user running the ESP Adapter, we can start the HDFS adapter with the command below:
To see files created in HDFS, open the NameNode UI and browse the Hadoop filesystem. HDFS files of 20 MB each are created in real time.
Finally you can use the HDMD engine from SAS to see the collection of HDFS files as a single SAS table. For example, the program below will create the necessary metadata in HDFS to do so :
The "big Transactions table" can now be opened any time with the latest big transactions.
Note: You could also do the equivalent in HIVE, referencing the files as a HIVE table (see "LOAD" command).
In this article we have illustrated a way to interface ESP with HDFS.
However in the new version of ESP, new Hadoop integration perspectives will be offered with Apache NiFi. Apache NiFi is a new dataflow framework for Hadoop data. More generally it is a system to automate the flows of data between various systems via “Processors”: HDFS/HBASE, SerDe files, Events Hubs, JMS, NoSQL databases (Cassandra, Mongo), Kafka, SFTP, S3 storage, Twitter feeds, etc...
A special .nar file (zipped container file that holds multiple JPEG files and a .xml metadata file) will be provided as part of ESP 4.1. Once deployed in your NiFi setup, you will be able to design your Apache NiFi flows for integration using "ListenESP" and "PutESP" processors. For more information about SAS ESP and NiFi also check out the Mark Lochbihler’s Hortonworks blog on SAS ESP and "Hortonworks data flow" (NiFi based) integration : Moving Streaming Analytics out of the Data center
|
OPCFW_CODE
|
M: The dynamic, powerful abilities of JavaScript Style Sheets (1997) - Lammy
http://sunsite.uakom.sk/sunworldonline/swol-04-1997/swol-04-webmaster.html
R: temporallobe
This is a fascinating look back at the beginnings of CSS and JavaScript. Back
then it was the wild west and anything seemed possible. I kinda miss those
days at times. This was about the time I was in university, just learning the
very basic concepts of CSS and JavaScript. Now we have "stacks" that provide
so many layers of abstraction that hardly anyone in my field actually knows
the underlying core trinity (JavaScript, CSS, and HTML). In fact, many
seasoned web app developers I know have no clue how to construct basic tables.
I've spoken to countless developers (one in fact who has a PhD in GIS) who was
not aware that the ONLY thing browsers understand are those three things -
there is no other magic under the hood. And don't get me started on transpiled
languages like Coffeescript and TypeScript. I actually had an argument with a
developer who thought that browsers natively understood these languages. I had
to prove this by showing him in the debugger that the browser only loaded .js
files. I'm not saying these tools don't have merits, but the farther away we
get from what browsers actually use, the harder it will be to understand,
debug, and fix the products we create.
R: dragonwriter
> I've spoken to countless developers (one in fact who has a PhD in GIS) who
> was not aware that the ONLY thing browsers understand are those three things
Er, most browsers have built in handling for things other than HTML, JS, and
CSS. XML (generically), XSLT, SVG, GIF, PNG, JPEG, WOFF, PDF,...
|
HACKER_NEWS
|
Splunk is software for collecting and analyzing big data. This program provides you with an in-depth view of the progress of your business by analyzing and valorizing the Big Data generated in the company’s technological infrastructure, security systems, and business plans. For this purpose, Splunk monitors everything from user clicks to secure transactions and network activities. This powerful product provides you with valuable information from the raw data collected by the machine and thus improves business intelligence.
Splunk helps you acquire this information and share it by having various capabilities, including advanced and comprehensive search, visualization, and the use of various charts and graphs, as well as ready-made templates for predefined use cases. All you need to do is feed the raw data to the app, and let Splunk do the rest. The main mission of this program is to make organizations and businesses smarter through a comprehensive analysis of raw data and correctly display the problems and weaknesses of businesses. So far, this program has more than 10 thousand customers all over the world.
Features and Specification of Splunk software:
- Collect and index data from almost any data source
- Powerful search
- Comprehensive analysis and visualization of various data through various charts and graphs
- Simple and powerful user interface
- Ability to filter different data (suitable for large volumes of data)
- Multi-user capability of the program (several users can be defined with separate usernames and passwords and separate access levels)
- And …
Minimum supported hardware capacity
Intel Nehalem CPU or equivalent at 2 GHz, 2 GB RAM
Recommended hardware capacity
Windows platforms 2x six-core,
2+ GHz CPU, 12 GB RAM,
RAID 0 or 1+0,
with a 64 bit OS installed.
During the installation of the software, after activating the Check tick, it is necessary to enter the username and password for activation. First, you must disable the antivirus and open the crack folder and open the License Server Win64 file. The values in front of the Master UUID Copy and paste in the username field and the values in front of Pass4Symmkey in the password and Confirm password fields.
And continue the installation process until the end. Also, after installation, to access the software panel, in the address bar of the browser, search this address http://127.0.0.1:8000/en-US/account/login, and to log in, enter the same username and password as during the software installation. Enter the software you entered in this section to enter the software environment and use it.
Installation date and version of Windows
Splunk Enterprise 8.2.0 version was successfully installed and cracked on Windows 10 64-bit edition.
|
OPCFW_CODE
|
On Mon, Sep 27, 2021 at 11:45:59AM +0800, 李真能 wrote:Thanks for reminding, Xorg has the config option "PrimaryGPU", it has the same func as boot_vga.
在 2021/9/27 上午4:20, Bjorn Helgaas 写道:Yes. I think the arbiter also provides an interface for controlling
On Sun, Sep 26, 2021 at 03:15:39PM +0800, Zhenneng Li wrote:"Some "legacy" VGA devices implemented on PCI typically have the same
Add writing attribute for boot_vga sys node,When you repost this, please take a look at the git commit log history
so we can config default video display
output dynamically when there are two video
cards on a machine.
Xorg server will determine running on which
video card based on boot_vga node's value.
and make yours similar. Specifically, the subject should start with a
capital letter, and the body should be rewrapped to fill 75
Please contrast this with the existing VGA arbiter. See
Documentation/gpu/vgaarbiter.rst. It sounds like this may overlap
with the VGA arbiter functionality, so this should explain why we need
both and how they interact.
hard-decoded addresses as they did on ISA. When multiple PCI devices are
accessed at same time they need some kind of coordination. ", this is the
explain of config VGA_ARB, that is to say, some legacy vga devices need use
the same pci bus address, if user app(such as xorg) want access card A, but
card A and card B have same bus address, then VGA agaarbiter will determine
will card to be accessed.
the routing of these legacy resources.
Your patch changes the kernel's idea of the default VGA device, but
doesn't affect the resource routing, AFAICT.
And xorg will read boot_vga to determine which graphics card is the primaryDoesn't xorg also have its own mechanism for selecting which graphics
graphics output device.
device to use?
Is the point here that you want to write the sysfs file to select the
device instead of changing the xorg configuration? If it's possible
to configure xorg directly to use different devices, my inclination
would be to use that instead of doing it via sysfs.
That is the difference about boot_vga and vgaarbiter.
Signed-off-by: Zhenneng Li <lizhenneng@xxxxxxxxxx>
drivers/pci/pci-sysfs.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index 7bbf2673c7f2..a6ba19ce7adb 100644
@@ -664,7 +664,29 @@ static ssize_t boot_vga_show(struct device *dev, struct device_attribute *attr,
+static ssize_t boot_vga_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+ unsigned long val;
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_dev *vga_dev = vga_default_device();
+ if (kstrtoul(buf, 0, &val) < 0)
+ return -EINVAL;
+ if (val != 1)
+ return -EINVAL;
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ if (pdev != vga_dev)
+ return count;
static ssize_t pci_read_config(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf,
No virus found
Checked by Hillstone Network AntiVirus
|
OPCFW_CODE
|
package de.fau.cs.mad.fly.levels.tutorials;
import de.fau.cs.mad.fly.I18n;
import de.fau.cs.mad.fly.features.IFeatureInit;
import de.fau.cs.mad.fly.features.overlay.InfoOverlay;
import de.fau.cs.mad.fly.game.GameController;
import de.fau.cs.mad.fly.game.GameControllerBuilder;
import de.fau.cs.mad.fly.levels.ILevel;
import de.fau.cs.mad.fly.res.GateCircuitListener;
import de.fau.cs.mad.fly.res.GateGoal;
/**
* Level script file for the tutorial to explain movement in both directions,
* points and remaining time.
*
* @author Lukas Hahmann <lukas.hahmann@gmail.com>
*
*/
public class TimePointTutorial implements ILevel, IFeatureInit, GateCircuitListener {
@Override
public void create(GameControllerBuilder builder) {
builder.addFeatureToLists(this);
}
@Override
public void init(GameController game) {
InfoOverlay.getInstance().setOverlay(I18n.tLevel("tutorial.navigate.both.directions"), 4);
}
@Override
public void onFinished() {
// do nothing
}
@Override
public void onGatePassed(GateGoal gate) {
if (gate.getGateId() == 0) {
InfoOverlay.getInstance().setOverlay(I18n.tLevel("tutorial.points.for.gates"), 3);
} else if (gate.getGateId() == 2) {
InfoOverlay.getInstance().setOverlay(I18n.tLevel("tutorial.remaining.time"), 3);
}
}
}
|
STACK_EDU
|
# Hladanie cisel
# Na vstupe získate zoznam čísel oddelených medzerou. Spočítajte a vypíšte minimum, maximum, priemer a medián. Vypíšte posledné číslo.
# Vyskoušajte i tento vstup:
# 3 9 4 8 2 1
# Sample Input:
# 15 13 7 10 11
# Sample Output:
# Min: 7.00 Max: 15.00 Avg: 11.20 Med: 11.00
# Last number: 11
moj_zoznam = input().split()
dlzka = len(moj_zoznam) # pocet cisel v zozname
for prvok in range(0, dlzka):
moj_zoznam[prvok] = int(moj_zoznam[prvok])
# moj_zoznam = [int(x) for x in moj_zoznam]
sortovany_zoznam = sorted(moj_zoznam, reverse=False) # [7, 10, 11, 13, 15]
maxi = max(moj_zoznam) # maximalne cislo
mini = min(moj_zoznam) # minimalne cislo
avg = (sum(moj_zoznam) / dlzka) # priemer
x = round(dlzka / 2) # zaokruhly
if dlzka % 2 != 0:
med = sortovany_zoznam[x]
else:
med = ((int(sortovany_zoznam[x - 1]) + int(sortovany_zoznam[x])) / 2)
posledne_cislo = moj_zoznam.pop() # posledne cislo
print('Min:', '{:.2f}'.format(mini), 'Max:', '{:.2f}'.format(maxi), 'Avg:', '{:.2f}'.format(avg), 'Med:','{:.2f}'.format(med))
print('Last number:', posledne_cislo)
#----------------------------------------------------
# import statistics
# vstup = input()
# retezce = vstup.split()
# cisla = [int(r) for r in retezce]
# minimum = min(cisla)
# maximum = max(cisla)
# prumer = statistics.mean(cisla)
# median = statistics.median(cisla)
# posledni = cisla[-1]
# print(f'Min: {minimum:.2f} Max: {maximum:.2f} Avg: {prumer:.2f} Med: {median:.2f}')
# print(f'Last number: {posledni}')
|
STACK_EDU
|
Old Krell (a Panasonic Pentium-III Toughbook laptop circa 2002) having served its various purposes but getting somewhat hoary with age, I decided to invest in a new laptop. My criteria for selecting the new laptop included: relatively lightweight, good battery life, and Linux compatibility. I was sorely tempted to buy one of the new MacBook Pros, but couldn’t really justify going with Apple when all my development recently has been on Linux. After the usual bout with Google, I found this thread from which I came away with the general idea that Lenovo (formerly IBM ThinkPads) laptops were pretty compatible with Linux and good computers for developers (maybe about 90% agreement on this, with the usual vocal minority that felt these computers were pieces of $*&@). Going to the Lenovo website, I then sought to find a match with my other requirments: small and long battery life. I came up with the model X220.
I’ve been using the machine for about a month now. SuperKrell has moved in and Krell has moved out — going to my granddaughter Samantha with a shiny new minty fresh installation of Linux Mint. I installed Ubuntu 11.04 on the Lenovo. I decreased the preinstalled Windows 7 partition to about 20 GB (damn Microsoft tax!) and left the rest for Ubuntu. Amazingly, everything works on this laptop with Ubuntu! Sleep and Hibernate work. Function key functions (volume, brightness, etc.) work. Trackpad works great, including two-fingered scrolling. The keyboard is great to type on. Wireless works without tweaking. Everything just works.
The system I got has a 2.3 GHz i5 Sandy Bridge processor, 4 GB of RAM, and a 120 GB solid state drive. Startup and shutdown are so fast with the solid state drive that there doesn’t seem much point in using sleep or hibernate. The fan is super quiet. I had to hold it right next to my ear to determine that it even has a fan. I got both a 9 cell and 6 cell battery with the unit. The 9 cell battery obviously is heavier and sticks out from the back of the laptop. I haven’t formerly tested it, but I seem to be getting about 8 hours of battery life with it. The 6 cell battery makes the unit sleeker and lighter, and I still get 4-5 hours with it. By the way the battery meter in Ubuntu seems somewhat confused by the batteries; I’m not sure how accurate the estimates are. Nevertheless, the battery life is definitely great and I am pleased with it. I use the smaller battery at home, and the bigger when I travel.
The negatives are minor. There is a nice yellow USB 3.0 port, but it only works if you get the i7 version of the laptop. The i5 is already insanely fast for my purposes; I couldn’t justify the extra expense of the i7 processor. I’ll live without the USB 3.0 port. The screen size is rather small (duh — what did I expect in a small laptop?). I am getting used to it. The main problem I have with the screen is a general problem with today’s laptops. It seems all laptops nowadays are going with 16:9 ratio screens. I am used to big square screens and like the amount of text you can display on a squarish screen. Looking at laptops on sale at Office Depot the other day, it looked to me like all of them had wide screens. Great for watching movies, but not my favorite for real work. Oh well. Finally, the touchpad is very sensitive and, with the small size of the laptop, it is easy to brush against it while typing, in the process finding out shortcut mouse commands that I never knew existed in Ubuntu, resulting in various odd and unwanted screen phenomena.
I have read that Lenovo ThinkPads have a reputation for being ugly. My X220 though is sleek and thin, with a black matte finish that is actually quite attractive. It’s also a sturdy little unit, not flimsy like some laptops I have played with. Overall a welcome addition to the computer armamentarium of EP Studios.
|
OPCFW_CODE
|
export const events = {
current: null
};
/**
* This function triggers a throttled event function with a specified delay (in milli-seconds). Events
* that are triggered repetitively due to user interaction such brush dragging might flood the library
* and invoke more renders than can be executed in time. Using this function to wrap your event
* function allows the library to smooth out the rendering by throttling events and only responding to
* the most recent event.
* @name events.trigger
* @example
* chart.on('renderlet', function(chart) {
* // smooth the rendering through event throttling
* events.trigger(function(){
* // focus some other chart to the range selected by user on this chart
* someOtherChart.focus(chart.filter());
* });
* })
* @param {Function} closure
* @param {Number} [delay]
* @return {undefined}
*/
events.trigger = function (closure, delay) {
if (!delay) {
closure();
return;
}
events.current = closure;
setTimeout(() => {
if (closure === events.current) {
closure();
}
}, delay);
};
|
STACK_EDU
|
No, it is not possible to directly connect a motherboard to a laptop. The motherboard is the main circuit board of a computer system, and it is designed to be installed inside a computer case. A laptop, on the other hand, has a specific form factor, and its components, including the motherboard, are integrated into the laptop’s chassis.
So, you might have wondered if there would be any communication between your
laptop and a separate motherboard if you run a SATA cable from the data side of the HDD port from the laptop that is powered on and connected directly to a desktop’s motherboard SATA port that is powered in and booted into Windows already.
But why would you want to attach a separate motherboard to your laptop if it’s not just for a curious experiment? There are a few reasonable reasons:
Why might you want to connect a motherboard to a laptop?
Recently, I have received more emails with questions about repairing or upgrading a laptop’s hardware, including the motherboard.
Connecting an external motherboard to a laptop is, unsurprisingly, a common topic in the PC community, and here’s why you would want to give it a shot:
- To upgrade the laptop’s hardware components (e.g., CPU, RAM, graphics card) to improve performance)
- To repair a damaged motherboard in the laptop by replacing it with a new one
- To salvage components from a desktop computer and use them to build a custom laptop
- To experiment with building a custom laptop from scratch as a hobby or project
- To test or prototype a new hardware design for a laptop
Despite these operational needs, it’s impractical to connect a motherboard to your laptop. Here’s why?
Why it’s not possible to connect a motherboard to a laptop
The form factor, component specifications, incompatibility factors, and risk of damage are the core barriers that make connecting a desktop motherboard to a laptop impractical.
Form factor difference
If you are thinking of connecting a motherboard to your laptop, you need to consider the most important hindrance, which is the differences between the laptop and desktop form factors.
Your laptop has an integrated screen, keyboard, and trackpad in one enclosure with a single housing.
On the other hand, desktops have separate components placed on top of each other or mounted on stands. So you can have towers or cubes on desktops, but not on laptops. This form factor orientation influences the compatibility of connecting to motherboards.
Also, desktop computers can come in different sizes depending on how many components you want to fit inside the chassis; however, they still use more space than laptops.
Laptop components are designed to fit specific models.
Laptop components are designed to fit specific models for a variety of reasons, including:
Size and shape: Laptop components need to be designed to fit within the limited space of the laptop’s chassis. Each laptop model has a specific size and shape, and the components must be designed accordingly to fit into the available space.
Power requirements: Laptop components are designed to work within specific power limits to ensure that they can be powered by the laptop’s battery and charging system. Components — the external motherboard in this case — that require too much power may not be compatible with a specific laptop model.
Thermal design: Laptop components generate heat during operation, and the laptop’s thermal design is specifically tailored to handle the amount of heat generated by the components. If you are about connecting a motherboard that
If it generates too much heat or is not compatible with the laptop’s thermal design, it may cause overheating and other issues.
Connectivity: Laptop components are designed to connect to the laptop’s motherboard and other components in specific ways. Each laptop model has a unique motherboard design and component layout, and the components must be designed to match these specifications. So, there are a lot of connectivity considerations before you can decide to use an external motherboard for your laptop.
In essence, laptop components are specifically designed to fit within the limitations and requirements of a specific laptop model. So, attempting to use components from another model or a desktop computer—the motherboard, in this case — could lead to compatibility issues or even damage to the laptop.
Challenges in trying to connect incompatible hardware
Another barrier to connecting the motherboard to the laptop is the incompatible hardware between the two units.
You can interconnect two computers using Ethernet crossover cables like USB 3.0 cables, but you have to be sure of what is involved before using them.
For example, if your desktop SAS RAID array is faster than USB 3.0’s maximum transfer speed, SATA III won’t help you much, and you can get 10 GB Ethernet cards to handle your data transfers.
The point is that the hardware specifications of desktop computers are different from laptops, and this relates to motherboard connectivity and compatibility.
Risk of damage
The motherboards integrated into compatible laptops are there for a reason. So attempting such a connection is not recommended and could even cause damage to the laptop.
If you find a way around it, electrical conflicts could damage several hardware components and destroy your data.
Connecting an external motherboard unit to your laptop can sound like an interesting experiment but not worth it.
Difference between motherboards in laptops and desktop motherboards
|Factors||Laptop motherboards||Desktop motherboard|
|Form factor||Designed to fit into smaller form factors to fit within the limited space of a laptop chassis.||They are larger and have more space for expansion slots and ports.|
|Power consumption||Designed to consume less power, as laptops rely on batteries||Have more power requirements, as they are typically plugged into a power outlet.|
|Components integration||Often have components integrated into the board itself, such as the CPU, GPU, and sometimesRAM||Typically have expansion slots for additional components.|
|Cooling||Designed to operate in a confined space and with limited cooling options, often relying on heat pipes and fans.||Typically have more space for cooling components like fans, heat sinks, and liquid cooling systems.|
|Upgradability||Often difficult to upgrade or replace, as they are custom-designed for a specific laptop model||Often designed to be more easily upgradable, with interchangeable components and standardized sizes for expansion cards andRAM|
|Connectivity||Fewer ports and connectivity options due to their smaller size||More ports for peripherals like USB, Ethernet, and audio.|
What you can do instead
With all these in mind, you want to refocus your focus on salvaging the situation. Since the discussion is on motherboard replacement, it can be assumed that you’re dealing with a bad laptop motherboard that needs to be replaced.
So instead of connecting an external motherboard to your laptop, which is impractical, you can repair or upgrade the laptop’s hardware.
But troubleshooting or repairing laptop hardware components is not as easy as it may seem on desktop motherboards. Here are a few factors to ponder to that effect:
The typical laptop’s internal design has all the components soldered up. So the cost of replacing the motherboard in the laptop will be more than the value of the motherboard. So repairs will cost you more than replacements.
You might have considered connecting an external motherboard to your laptop because the device’s motherboard is damaged. This is not ideal. And if the damage is caused by physical impact (e.g., an electric short, a liquid spill, etc.), then the best shot is to replace it.
Tinkering with laptop motherboards can be a rough experience if this is the first time you’ve done something like that. There are a lot of screws, connections, and circuits to be conscious of compared to desktop motherboards, where the case unit is the singular focus.
Eventually, You’ll need to consult a technician
The risk of DIY repairs or upgrades is higher on laptop motherboards than on desktop motherboards.
When you think of the availability of new or serviceable replacement parts in your laptop — motherboard, in this case — and the cost, you’d want to leave it to a professional.
Also, each laptop has a unique component design. But many laptop motherboards have a CPU and GPU (video controller) integrated (non-removable) and soldered onto the motherboard.
So it is quite impossible to replace, and this is increasingly the case for RAM (memory).
The concept of connecting an external motherboard to your laptop sounds
impractical. Yes, you can connect an additional drive and GPU (via USB). But not an additional motherboard.
It is as if you are attempting to add an external engine to a car, but the effect in this case is worse.
What you can do, however, is replace the motherboard on your laptop. But connecting them externally is not possible and has no benefit.
If you are considering replacing your laptop’s motherboard, it can be a rough experience if you have never done anything like it previously.
Unlike on actual motherboard units where things are visible, you may need an exploded view layout of the system to ensure you know where everything goes.
If you are confident in DIYing your laptop hardware, I recommend you leverage service manuals, which are typically available online.
Keep in mind that desktop hardware is easier to access and repair, and parts are usually interchangeable, unlike laptop components.
Bottom line, I would not recommend that the casual user attempt a laptop repair — especially a motherboard-related one.
|
OPCFW_CODE
|
awesome presentation of evidence! i would remind everyone to also look around in the account usage details via the website account. ive found some shady stuff going on ! the info goes by calender date and mine goes from january 2012 to feb and then back to january!
i was told by verizon techs unless the battery was removed there could still be data transfer also there is an 80 page report by the department of justice detailing data breaches via verizons data network, also since im not an electronics expert i dont quite know how to explain it but one of the 3 devices i had to return actually functioned for several hours with the battery removed! the screen and the led remained on!
Assuming we are still talking about the MiFi 4510L:
The MiFi 4510L uses an E-Ink display. The great benfit of E-Ink is that it does not require power to provide an image. This is why E-Ink was originally designed to work with E-Readers like the non-color/non-tablet Nook and Kindle. You can go for days on a single charge where a similar color tablet can only go for hours due to normal power consumption habits of that technology. E-Ink is also easier on the eyes because it does not require a backlight to view where as a LCD/LED screen does.
You can easily reporduce this feature by pulling the battery on a MiFi while it is booted up. The screen will freeze and remain frozen for a long time until either the battery is reinserted or the device rebooted.
For more info on E-Ink:
I appreciate the explanation of what circumstances constitute "data usage" when a computer is on, however no-one seems to be able to explain how data is used when (a) the computer is off or (b) the mifi is "off" - there seems to be many many people (myself included) who have been hit with usage charges under these circumstances. Nowhere in the users manual does it say "WARNING: Turn off the unit when not in use" or does it explain how this thing works. Mine never really turned off - I shut it, it would go back on! I noticed the thing flashing even when the computer was off - didn't think much of that, becase my modem would do that. It wasnt until I got hit with a whopping $800 bill that I finally not only pulled the plug but the battery too. Apparently this thing is using data regardless if the computer connected to it is on or off. Does that make any sense??
I'll take a stab at your questions.
To understand how data is consumed, both while the devices are on and off, one needs to understand how VZW reports on data usage. Data is logged at VZW in intervals called sessions. A session can be a few minutes or many minutes long, we dont have a true definition for how they work but they appear to be dynamic. Whenever the session ends the data is sent back to Verizon for reporting. Those sessions will not be visiable to the user until after a short delay. A session does not necessarily end when the device is powered down, it may continue until it is determined to be no longer needed. Even though the session has not ended the actual data transfer should end when the device is powered down.
With that being said, VZW could give the impression that your device is consuming data while the device is powered off. However, any mismatches between session times and actual consumption should be relatively close. You shouldnt have a session sneak into your log several hours after consuming the data, it should be more like 20-30 minutes behind. When logs and device behavior do not match then VZW tends to swap out the device as defective as the solution. However, VZW will still try to stick you with the bill as we have seen in multiple threads. Most can use those log mismatches as leverage to terminate thier contract early without fees or at discounted rates.
Now, the state of being powered on and off is another topic. Each device has its own behavior and features that determines when the device is powered down, charging, powered on or anything in between. With the MiFi 4510L we need to look at the colored LED light to know what state the device is in. The MiFi is off when the LED light is off. Since E-Ink displays can be deceptive the LED light should always tell us if the device is powered on in any way. If you leave your device plugged into a wall charger then there is a posibillity that it is not truely powered down or that a step was missed while attempting to power down.
Lets stir the pot a little more...
News from the FCC seems to be fitting to this thread:
It seems the gov't is trying to regulate online data usage and monitoring. Not sure if its a solution to the problems posted here but it is a step in the right direction.
Hello and "Thank You" for your (very informative) reply! I am in dire straights right now over this (because of the dispute over the mifi usage charges, which I refused to pay, they have disconnected ALL my wireless services at the moment) I feel like I'm in a lost battle but I'm not backing down and quite frankly cannot pay $800 for less than 2 months of internet...I signed on for this to save money!
1. I got this to replace my satellite internet for my home computer. I was told it worked the same way as a modem.
I do not use it for travel, i attached it to the computer, plugged it in and that was that. (I've since read yours/other threads that this is NOT a good replacement for home internet...I based my decision on what the sales rep said)
2. Initially, I just left it connected (battery in, but also plugged in) and left it alone regardless of whether or
not the computer was on...
(I never had to shut my modem - if the computer is not on, it's not doing anything)
3. I noticed that even with the computer off (i.e. no one connected to internet), the unit would be flashing various colors. I tried turning it off as per the manual (which is VERY basic). The thing didn't shut off...even after it appeared to shut off (no lights whatsoever), it would mysteriously come back on - flashing lights, between yellow/pink...it looked like it was "doing something" even though the computer was off. I figured it must be just like a cell phone, modem, dvd recorder, or any such electronic device....that it goes into some kind of "standby" mode, but you're only using it when your talking/texting/recording, etc. I didnt give it much thought at first.
4. Untill I got the first month's bill - 37GIG OVER and the customer service rep said I was already 9GIG over and I wasn't even a week into the next billing cycle. I explained this was TOTALLY IMPOSSIBLE - I am hardly even home and/or on the computer - no-one else is here...my job is far away and I am sometimes gone for days. At that time I could count on one hand how many times I had even been on the computer, and my internet use is a little browsing, checking email, facebook, etc.
5. I had them "suspend" the service that day since I wouldnt be home for a few days...I had to unplug it and take the battery out to be sure it was off. Havent used it since. No-one could explain to me how this usage was possible, nor admit that there could be a problem with either their equipment or service. They insisted that "it was used and you have to pay".
I've gotten a different story with each person I spoke to...I have logged more than 20 phone calls and countless hours with customer service reps, tech support, and gotten nowhere, other than now I have no cell phone, no "home-phone connect" (which was supposed to replace my land line) - and each month the bill is getting higher...now they are sticking me with all kinds of cancellation fees, late fees in addition to everything for a grand total of $1500.
6. I appreciate the technical explanations of how these things work - but any advice as to how I can possibly resolve the issue of being wrongly charged for this so-called "phantom usage"? From the get-go, this service was misrepresented, and
it's scary how each sales rep/customer service/tech person gives you different information. Any advice is truly appreciated.
|
OPCFW_CODE
|
So as mentioned previously, I migrated my blog from WordPress to FunnelWeb - in this post I'm going to explain the steps I went through to get from the idea of moving the blog, to the text you're reading on your screen right now.
Getting the code
FunnelWeb is an open source project, hosted on BitBucket - so the first step was getting myeslf a copy of the code to be able to play around with. Given that I had no experience with mercurial before this, I had to do a little bit of playing around with things to get it all sorted. I began by installing TortoiseHg to give me a GUI, and then creating a clone of the repository locally on my laptop.
hg clone https://bitbucket.org/FunnelWeb/release
I did this part via the command line tool, but after a while it had downloaded my clone of the repository and I could attempt to run the tool locally
Customising my install
So before I rolled it out, I wanted to customise a few things about the app - all of which were essentially look and feel realted. To achieve this, I created myself a new theme to apply to the blog (this meant all my customisations would be in new files rather than modifying existing files, which would mean that it would be easier to merge changes in to the app later on). I created myself a new folder under src\FunnelWeb.Web\Themes called BrianFarnhill, and the added myself a style sheet, some images and then added a few views to overwrite the out of the box ones where I needed to make minor HTML changes.
Hosting and deploying
Once I was ready to try pushing it out I had a look around at hositng options, and I was recommended to look at AppHarbor. Appharbor is a great way to host .NET applications and it does some very cool stuff around running tests and deploying files to the web servers for you. So to get my code on to thier servers I needed to push it to somewhere that they could deploy from. I decided to push mine to a private repo on BitBucket (so seperate from the main FunnelWeb one that I got the code from) and have it deploy from there. There is a good guide on Appharbor about how to do this, but basically the premise is that you grant read only access to the AppHarbor account and add a POST request to the BitBucket actions for what happens on a commit. This means that when I do a push, BitBucket will let AppHarbor know and they will grab the code, build it and run tests and if the tests pass, deploy the files to the web server.
I did add the SQL Server add on to my AppHarbor app which gives me 20MB worth of database space to run the application. It's important to keep in mind that with Appharbor everytime they deploy your app they wipe out the files on the server and replace it with the build output. Having the database meant that my blog posts were protected from being wiped out - but I needed an alternative for images and attachments. From another recommendation I looked at RackSpace's CloudFiles. This was a dirt cheap way of hosting files in the cloud, and using a tool like CyberDuck it's easy to upload them and then the files are published across the Akami CDN as well. So I got that set up and moved all my attachments to that.
To get my app playing nice in both the local install and the remote AppHarbor install I needed to be able to tweak the config files use use configuration transformations. FunnelWeb puts some of its config in the "my.config" file, things like admin username, password and SQL connection string. So I created my.Debug.config and my.Release.config and used some transformations in there so that AppHarbor would be able to resolve the release settings for it's deployment and locally it would use the default settings in the my.config file. In the release file I had the following code:
<?xml version="1.0" encoding="utf-8"?> <funnelweb xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <setting key="funnelweb.configuration.database.connection" value="[my connection string]" xdt:Locator="Match(key)" xdt:Transform="Replace" /> <setting key="funnelweb.configuration.authentication.username" value="[My username]" xdt:Locator="Match(key)" xdt:Transform="Replace" /> <setting key="funnelweb.configuration.authentication.password" value="[my password]" xdt:Locator="Match(key)" xdt:Transform="Replace" /> </funnelweb>
The xdt tags are used to dictate how the tags should be merged in to the original config. Each tag I specifiy that they should replace a tag in the source, and that tag should be found by matching the 'Key' attribute.
Once I had the code running I needed to work out how I could move the content I had in my existing blog across to this new one. FunnelWeb has a BlogML import option, so I needed to get the content across. I spoke about the tool I wrote for this (called WXRtoBlogML) in my previous post so I wont elaborate on it too much.
Issues I came across
Once I got the content in to the new blog, I did still have some issues to address before going live - and these were all to do with content and stemmed from decisions I made many moons ago when working with the very first version of my blog which ran on Windows Live Spaces. To get my source code to appear formated in that blog I used a tool which generated the HTML styles needed from the source code, and it saved the HTML. This was great at the time, but when it came to using this elsewhere it was painful. I needed to manually clean up a lot of this rubbish as when I tried to run it through a couple of tools it all went a bit pear shaped. So I spent a lot of time working to manually clean up my posts and convert them all from HTML to Markdown. As part of this I cleaned up source code, uploaded images to my CloudFiles account, and corrected any tags and added introductions to each post. This was the most painful and time consuming part of the migration, and looking back I should have automated some of this, but oh well - it's done now.
The other issue I found (and in honesty was too lazy to solve) was around having AppHarbor run the unit tests for FunnelWeb - it was always failing them, and it's because there was no SQL express install for it to test those methods against, meaning that it would always fail no matter what. To get around this I basically got AppHarbor to ignore the tests all together (told you it was lazy!). If you create a solution file called AppHarbor.sln it will be the solution that gets build in place of any other solution file in the code repository. So I copied the original solution file, renamed it and then took out the test project. Cheating, I know - but it got it going and I can run the tests locally to be sure that I haven't broken anything major (I know the tests should be running in Appharbor to ensure there are no environment specific things affecting them, but whatever!)
So that's the gist of the process involved in moving from WordPress to FunnelWeb - hopefully it will help others who are thinking about making the move. I've got a bunch of ideas of things I want to do with the blog now I have control over the source code. It's also proving to be a neat way for me to start understanding ASP.NET MVC3 as well, so it's all a winner in my book!
No new comments are allowed on this post.
|
OPCFW_CODE
|
SVN status is normal instead of added
I have a question regarding svn: I added directory D1 with files F1 and F2 in feature branch. After it I'm trying to merge it back to trunk. After merge files appears in commit window, but they are marked as normal(+) and directory D1 is marked as added. After commit these files are not transfered to server, so build fails. Why these files are marked as normal instead of added?
Thanks!
Some details:
Make a branch B1 from trunk Add directory with files in branch B1
Merge all changes from trunk to branch for moment of merge Try to
merge B1 back to trunk
Merge branch to trunk back using reintegrate or "merge two different
trees" option - catch a diff between trunk and branch and put it
into trunk working copy.
Try to commit trunk
I actually would not necessarily expect any file data to be transferred for the added files, because the server should be able to just "copy" the files from the branch, by creating a link of sorts, if they are not modified. Where does the build fail? Are the files present in your working copy? Did you forget to commit the result of the merge? Your scenario says "try to commit trunk". Did you actually go through with the commit, with the new files checked for commit?
Svn must be told you want to add the files to the repository. It doesn't assume that any file under a revision controlled directory is a revision controlled file.
There are exceptions. For example, when you add a directory to the repository, the default behavior is to add the directory and everything it contains to the repository.
Now, if you did add the file, typically it is marked with a green plus. (It is confusing because you indicate it shows up as normal with a green plus sign, and that's not how a normal file shows up). If you have files that have green pluses, they are added but not committed. This means that at some point in time you did the correct add operation, but you need to commit the file back to the remote repository.
Truly normal (non revision controlled files) show up without any kind of file decoration.
Also certain ways of creating the files go around the operating system's desktop GUI event handler hooks. If you created the files using DOS utilities then the hook that TortoiseSVN uses to know the file was created will not typically be called.
Finally, the origin of the file must be accounted for to a small degree too. Dragging and dropping SVN files with TortoiseSVN is typically handled well, with the exception of dragging and dropping SVN files from two different checkouts of a repository, or two different checkouts of two different repositories. In the latter cases, the operating system's desktop GUI event handlers that TortoiseSVN implements will attempt to move the file in a SVN aware manner, but SVN itself doesn't support revision tracking across different local repositories.
It was strange for me that I've added these files in branch to the directory, and they are not displayed as added after merge, while parent directory is added...
@dbf I can't help but wonder if there is some extra, vital, piece of information that is missing to really know what is going on here. Your description is mostly about what has happened, and is very light on the details of what you did. Are you sure that what you did before checkin is described completely enough, and that it is not in any way different than what you always do? Also, I recommend an install of cygwin (to get the command line svn tool) in addition to TortiseSVN, because you get better error reporting.
Thanks, Edwin. I added some details to original post, looks like it is what I've done.
@EdwinBuck, you don't need cygwin to get the command-line tool. TortoiseSVN these days ships with a Windows version of the command-line tools.
@Ben Thank you for the note, that's really good info to know!
I still don't know how to resolve it using tortoise merge, here is a workaround that I discovered:
Merge branch to trunk using tortoise merge
Prepare patch between current state of trunk and it's previous state (commited)
Checkout trunk to separate dir, apply this patch and commit. It was required to add files manually, but at least it was possible to do it (it was not possible to add after merge).
A merge with added files should just work, adding the files to trunk as well. Applying a patch should not be needed at all, and doing so will lose history of the new files.
@Ben, what to you mean when you are talking about "a merge with added files"? As I said, these files are not available to be added using svn add (I get an error trying to add them) and it is displayed that there are no changes for these files when I open commit window.
I mean a merge from a branch, where the branch has added files. The merge operation itself should have added the files for you, and you should see those files in your working copy now. No further action should be needed during the merge, you just need to commit them.
|
STACK_EXCHANGE
|
refactor: use esbuild-plugins-node-modules-polyfill
This PR replaces @esbuild-plugins/node-globals-polyfill & @esbuild-plugins/node-modules-polyfill with the up-to-date & maintained esbuild-plugins-node-modules-polyfill
The esbuild-plugins repo itself points towards using esbuild-plugin-polyfill-node instead
https://github.com/remorses/esbuild-plugins/blob/373b44902ad3e669f7359c857de09a930ce1ce90/README.md?plain=1#L15-L16
After doing this in the Remix repo (see https://github.com/remix-run/remix/pull/5274), we got quite some new bugs, so we chose to go for @imranbarbhuiya's esbuild-plugins-node-modules-polyfill instead (see https://github.com/remix-run/remix/pull/6562), which is an up-to-date and well maintained alternative
Added benefit is that we won't get the following deprecation warnings when installing @esbuild-plugins/node-modules-polyfill:
npm WARN deprecated<EMAIL_ADDRESS>This package has been deprecated and is no longer maintained. Please use @rollup/plugin-inject.
npm WARN deprecated<EMAIL_ADDRESS>Please use @jridgewell/sourcemap-codec instead
Failing CI seems to be unrelated
@mrbbot It would be nice if we could drop the polyfills, but I don't think it hurts to switch to a more up-to-date & maintained version for now either 🤷♂️
hey @MichaelDeBoey :) definitely agree it's good to stay up to date in the meantime. however per https://github.com/cloudflare/workers-sdk/issues/1232#issuecomment-1699446194, we believe this represents a breaking change and as such will hold off for now. as @mrbbot noted, we plan to audit the polyfills -- if this update is still necessary we'll schedule it as part of the next major release 👍
These new polyfills are higher fidelity than the old ones, which causes the Cloudflare socket detection to fail in the node-postgres library. I have created a fix there to tighten that detection up - https://github.com/brianc/node-postgres/pull/3170
But we will need that fix to go hand in hand with landing this PR.
These new polyfills are higher fidelity than the old ones, which causes the Cloudflare socket detection to fail in the node-postgres library. I have created a fix there to tighten that detection up - brianc/node-postgres#3170
But we will need that fix to go hand in hand with landing this PR.
if you want to exclude net module then you can pass builtinModules.filter(a => a==='net') to https://github.com/imranbarbhuiya/esbuild-plugins-node-modules-polyfill#configure-which-modules-to-polyfill
These new polyfills are higher fidelity than the old ones, which causes the Cloudflare socket detection to fail in the node-postgres library. I have created a fix there to tighten that detection up - brianc/node-postgres#3170
But we will need that fix to go hand in hand with landing this PR.
if you want to exclude net module then you can pass builtinModules.filter(a => a==='net') to https://github.com/imranbarbhuiya/esbuild-plugins-node-modules-polyfill#configure-which-modules-to-polyfill
That sounds like a good solution for now. Thanks
Emptying the net package polyfill resolves the problem with the pg-cloudflare usage.
Here is a link to other projects that might rely upon these polyfills: https://github.com/search?q=node_compat+%3D+true+language%3ATOML&type=code&l=TOML
We should contact these projects and ask them to test out the prerelease of v4.
(There was no need to avoid polyfilling the tls library because the connect function in that polyfill is left unimplemented: https://github.com/jspm/jspm-core/blob/7af7d7413f472305d08d0d78ec3d1f15588be50a/nodelibs/browser/tls.js#L34).
Oops - sorry. I accidentally closed this by pushing the wrong commits when rebasing.
Will open a new PR.
|
GITHUB_ARCHIVE
|
|October 18th 2010: We are getting ready for our first stable release of Audacity in about four years!
Help Us With 2.0
- We would love help with testing.
- Help us with completing or translating the manual for 2.0, which is in its own wiki.
- We also need translators for the program.
- If you're a programmer, check out the developer guide, subscribe to the audacity-devel mailing list, and when you've got Audacity compiling, tell us if there's any bugs on the Bug List that you'd like to tackle. We're particularly keen to hear from people who can help us fix bugs in Windows releases that are holding up 2.0. We also want Audacity to work well on upcoming Ubuntu and Mac releases, which often bring significant changes.
Other Ways To Help
- Help us improve the articles on this wiki and suggest ideas for onward development.
- Help out on the Forum by answering our users' questions. We especially need people to answer questions in other than English.
- Add features that you care about to our Feature Requests page, and/or vote there.
In more detail...
Testers. Test coordinators. Test script writers. Quality system designers and managers. We need you all.
Documentation Writers (The 2.0 Manual)
We also want to hear from any highly experienced users who can help with keeping our officially released documentation such as the online Alpha Manual up-to-date and relevant. We need help bringing this Manual forward to readiness for the 2.0 release.
We always need more developers! Here is a proposed "Audacity Needs You!" page for the main Audacity web site. The intention was to put it in a high traffic area - but it seemed a bit too gung-ho. There's also a landing page, "Audacity Sounds Great" for free developer recruitment FLOSS adverts provided by stackoverflow. Better developer documentation would also help us. Our guides for compiling Audacity could do with streamlining. The relative difficulty of compiling Audacity on Mac and Windows is one barrier to more users getting involved in the development process. Join our developers' mailing list and introduce yourself!
Help us by removing spam links from the Wiki. Improve pages by making the writing clearer and correcting spelling and grammar.
Documentation Writers (This Wiki)
We often link to pages in this Wiki when answering questions from users. Help us keep these pages including the Tutorials up-to-date. As the program changes, these pages need updating. Screenshots on the Wiki need updating too. The current version of Audacity makes capturing the screenshots easier.
The Audacity software and web site is translated into many languages. We need translators for other languages.
Our officially released documentation has never been translated. We welcome offers of help to work on translating our Alpha Manual.
Suggestions on improving/streamlining the way we translate are welcome too.
Forum and -users mailing list
The main way we help users who need personal support with Audacity is on our Forum. If you are an experienced user, you can perform a great service for Audacity by helping to answer Forum questions. Monitoring the questions and issues that arise provides valuable input for our Frequently Asked Questions and influences the wider task of making development decisions.
An alternative way Audacity users help each other and learn about our program is the audacity-users mailing list. This is a subscription-only mailing list where messages to the list are sent out directly by e-mail. Feel free to join audacity-users and give a hand to new users!
Web site text
The main web site needs to be kept up-to-date with releases and features. Of course we also want to keep the site easy to use and free of "typos". We have a list of pending web site changes and we welcome comments about our web site.
We need graphics for Audacity itself, and for our main web site and Wikis. Someone with graphics skills could help us unify the program and our sites. We need a graphic designer with an eye for the whole picture who can help with improving Audacity's graphics in a systematic way.
We need help with campaigns:
- Audacity took part in GSoC (Google Summer of Code) in 2008 and 2009. We always need help planning for future participation, from getting ready to run a GSoC, to alerting potential students to the opportunity, to co-ordinating development during GSoC itself.
- We're considering being part of the next Google Code-in Contest (an open source development and outreach contest targeted at 13-18 year old students around the world).
- Audacity language learning initiative. This is a plan to develop an ecosystem of language learning tools around Audacity. Part of the plan involves building better links with the rockbox project so that Audacity can author audio in structured formats that assists language learners. This was talked about briefly at the 2008 GSoC mentor summit, but has not progressed beyond that.
Ideas? Want to Contact Us?
Didn't find what you were looking for? Here are various ways to contact us.
|
OPCFW_CODE
|
Can a subclass of B (which inherits from class A) inherit from a subclass of A instead of A itself?
I'm working on an accessibility project for an iOS application. Because accessibility does not act quite as advertised, I have to override accessibilityFrame, accessibilityActivationPoint and pointInside:withEvent in a subclass in order to expand the region recognized by VoiceOver (for both drawing and touch recognition) beyond the "natural" bounds of the control view. So, in order to change the VoiceOver bounds of a UIButton I have to subclass that class and then add these three methods. In order to do this for a UILabel I have to add another subclass with the code, and so on.
I can refactor the code in these methods to a central location, but I was wondering if this can be done more elegantly with inheritance. I'd like to put this code into a subclass of UIView (maybe called UIViewAccessible) and then create a subclass of UIButton called UIButtonAccessible which inherits from UIButton which would in turn inherit from UIViewAccessible instead of UIView. Is this possible, or can something like this be done with a category?
Edit: According to the docs, you can't really achieve this with a category:
If the name of a method declared in a category is the same as a method in the original class, or a method in another category on the same class (or even a superclass), the behavior is undefined as to which method implementation is used at runtime.
Is there some other way to do this?
This can't be done as you describe because Objective-c doesn't support multiple inheritance.
Maybe use method swizzling and/or categories to override the framework implementations with your own?
Hmm, I don't use categories much, but I've always perceived them as a way of adding new methods to an existing (probably sealed) class. I didn't realize they could also override existing methods on a class, if indeed they can. If that is the case, then I think my task may be ridiculously easy, as I can just add these three methods to a UIView category, and then I won't even have to subclass anything else.
Looks like categories won't work for this, as it's undefined which implementation of a same-named method is used.
That's only if there are two categories defining a method with the same name. If that's the case then you'll need to swizzle or find another workaround. Also perhaps you can file a bug report with Apple, they seem to take accessibility really seriously they may have a workaround for you or make an improvement.
@CarlVeazey: are you sure ("That's only if there are two categories defining a method with the same name")? In the sentence I posted, two categories with a same-named method is only one of the cases this applies to, the other case being a category with a same-named method as the original class.
@CarlVeazey: Apple does seem to take accessibility seriously, but that doesn't mean they're doing a good job of it. The accessibility stuff does a decent job with a plain, straightforward UI, but if your UI has any unusual custom elements (especially ones that make heavy use of swiping gestures) you're in for some pain.
Another problem with categories is that if you want to override the method, there's no way to call its super implementation.
To answer your question, no, it can't, since your UIViewAccessible is a second degree sibling to UIButton in the inheritance chain (both inherit from UIView at some point). But I guess you already knew that. As for a solution, you could wrap around your UIView accessible classes a decorator and use protocols for strong typing. That way you'll keep the code in one place. I've described this technique here in more detail (although for a different purpose, it's the same situation).
For the views that would support accessibility you'll have to do this:
@property (nonatomic, strong) UIView<MyAccesibilityProtocol>* view;
//self.view can come from the nib or previously created in code
self.view = [[AccesibilityDecorator alloc] initWithDecoratedObject:self.view];
//you can then use self.view like any other UIView,
//and because it also implements an
//accessibility protocol, you can use the methods
//implemented in the wrapper as well.
//more than that, you can control which methods to override
//in the AccesibilityDecorator class
[self.view addSubview:otherView];//could be overridden or not
[self.view myAccesibilityMethod];//custom method declared in the protocol
|
STACK_EXCHANGE
|
SQL Server is a relational database management system (RDBMS) developed by Microsoft. It is primarily designed and developed to compete with MySQL and Oracle database. SQL Server supports ANSI SQL, which is the standard SQL (Structured Query Language) language. However, SQL Server comes with its own implementation of the SQL language, T-SQL (Transact-SQL).
T-SQL is a Microsoft propriety Language known as Transact-SQL. It provides further capabilities of declaring variable, exception handling, stored procedure, etc.
SQL Server Management Studio (SSMS) is the main interface tool for SQL Server, and it supports both 32-bit and 64-bit environments.
Following are the popular editions/types of SQL server:
SQL Server Enterprise: It is used in the high end, large scale and mission Critical business. It provides High-end security, Advanced Analytics, Machine Learning, etc.
SQL Server Standard: Itis suitable for Mid-Tier Application and Data Marts. It includes basic reporting and analytics.
SQL Server WEB: It is designed for a low total-cost-of-ownership option for Web hosters. It provides scalability, affordability, and manageability capabilities for small to large scale Web properties.
SQL Server Developer: It is similar to an enterprise edition for the non-production environment. It is mainly used for build, test, and demo.
SQL Server Express: It is for small scale applications and free to use.
Let’s have a look at the below early morning conversation between Mom and her Son, Tom.
Ask your brain….!!! “Can you map, who is CLIENT and who the SERVER is?”
The most certain reply would be – “I am pretty smart in that and…. Son is a CLIENT as he is requesting for a cup of coffee and Mother, who is CAPABLE of preparing coffee, is a SERVER.”
Here, Tom is requesting his mother, a cup of coffee. Finally, mom does some processing with Milk, coffee, sugar and prepare coffee to serve it hot.
Analogy: MS SQL SERVER architecture.
A CLIENT is an application that sends requests to the MS SQL SERVER installed on a given machine. The SERVER is capable of processing input data as requested. Finally, respond with PROCESSED OUTPUT DATA as a result.
Below are the main components and services of SQL server:
Database Engine: This component handle storage, Rapid transaction Processing, and Securing Data.
SQL Server: This service starts, stops, pauses, and continues an instance of Microsoft SQL Server. Executable name is sqlservr.exe.
SQL Server Agent: It performs the role of Task Scheduler. It can be triggered by any event or as per demand. Executable name is sqlagent.exe.
SQL Server Browser: This listens to the incoming request and connects to the desired SQL server instance. Executable name is sqlbrowser.exe.
SQL Server Full-Text Search: This lets user running full-text queries against Character data in SQL Tables.Executable name is fdlauncher.exe.
SQL Server VSS Writer: This allows backup and restoration of data files when the SQL server is not running.Executable name is sqlwriter.exe.
SQL Server Analysis Services (SSAS): Provide Data analysis, Data mining and Machine Learning capabilities. SQL server is integrated with R and Python language for advanced analytics. Executable name is msmdsrv.exe.
SQL Server Reporting Services (SSRS): Provides reporting features and decision-making capabilities. It includes integration with Hadoop. Executable name is ReportingServicesService.exe.
SQL Server Integration Services (SSIS): Provided Extract-Transform and Load capabilities of the different type of data from one source to another. It can be view as converting raw information into useful information. Executable name is MsDtsSrvr.exe.
SQL Server allows you to run multiple services at a go, with each service having separate logins, ports, databases, etc. These are divided into two:
- Primary Instances
- Named Instances
There are two ways through which we may access the primary instance. First, we can use the server name. Secondly, we can use its IP address. Named instances are accessed by appending a backslash and instance name.
For example, to connect to an instance named xyx on the local server, you should use 127.0.0.1\xyz. From SQL Server 2005 and above, you are allowed to run up to 50 instances simultaneously on a server.
Note that even though you can have multiple instances on the same server, only one of them must be the default instance while the rest must be named instances. One can run all the instances concurrently, and each instance runs independent of the other instances.
The following are the advantages of SQL Server instances:
1. For installation of different versions on one machine You can have different versions of SQL Server on a single machine. Each installation works independently from the other installations.
2. For cost reduction: Instances can help us reduce the costs of operating SQL Server, especially in purchasing the SQL Server license. You can get different services from different instances, hence no need for purchasing one license for all services.
3. For maintenance of development, production and test environments separately: This is the main benefit of having many SQL Server instances on a single machine. You can use different instances for development, production and test purposes.
4. For reducing temporary database problems: When you have all services running on a single SQL Server instance, there are high chances of having problems with the problems, especially problems that keep on recurring. When such services are run on different instances, you can avoid having such problems.
5. For separating security privileges: When different services are running on different SQL Server instances, you can focus on securing the instance running the most sensitive service.
6. For maintaining a standby server: A SQL Server instance can fail, leading to an outage of services. This explains the importance of having a standby server to be brought in if the current server fails. This can easily be achieved using SQL Server instances.
|
OPCFW_CODE
|
We really like mock trials or practice trials, and a couple of years ago we had a very successful one so we thought we’d share a bit more about how everything was set up and how we had filled out the training sheet. Below is Diesel’s and my training before the trial, and my thoughts afterwards.
Instructions for helpers
|Distractions/considerations (distractions I want, I’ll do this or that before releasing the dog, etc)||If something goes wrong (what will I do if something goes wrong, what do I want the helper to do, f ex removing the game, etc)|
|Double marked retrieve on land|
|Double marked retrieve in water||If she runs in I want the helper to pick the game up before she can take it|
|Heelwork||Won’t send her until she’s heeled correctly and we’ve made a nice halt. Will reward often while moving into position.|
|Sweeping up||I want at least one gull. I’ll have the pidgeons close so that I can see how she handles them.|
|Blind retrieve on land|
|Blind retrieve in water||I’ll try to cast her a maximum of three times (tell me if I do more), then we need help with getting her interested in the right spot, f ex by the helper throwing a rock in the water close to the game|
I want to be videotaped Yes No
I want a written evaluation Yes No
Elsa’s comments on the mock trial
I’m very satisfied with my planning and how I carried it out. Diesel was steady by the water and had a nice heelwork almost the whole time, that I rewarded by giving her small pieces of meat balls behind my back (strategic reward placement when she was in the correct position). A couple of times she was slightly unfocused while waiting, especially when we we’re standing with the sweeping up area in front of us and the water behind us. Then she didn’t really follow me when we were turning but rather had her focus in the wrong direction. That’s something we need to practice more.
The things I didn’t write in the planning above were things I was fairly certain would turn out well, and it did (except the blind retrieve on land that was a bit diffuse (there happened to be 15-20 ducks to the right, behind the blind retrieve so she didn’t go straight to the spot but ran by the pond to check them out before I got her to stop and could cast her to the right place).
I’m also happy that she was steady while watching the marked retrieve on water. In fact I was so happy that I actually missed the first mark myself, which she did as well and then she was slightly hesitant to go into the water the second time, but I could cast her out to just about the spot where it was so she got it back.
She was quite hesitant on the blind retrieve on water. I got her about a third of the way, then one of the helpers helped us out by making a small splash, and then she got it.
While hunting/sweeping up (part of trials in Sweden, but not in the UK/US) she was very tempted by the game scent while we were waiting for our turn. Then she hunted really well (picked up the gull first!) even though she worked a bit more with her legs than with her brain.
The double marked retrieve on land was spot on.
To sum it up we need to practice walking behind the gun, getting into position and blind retrieves on water. That will be our homework for the following weeks!
On our bonus material page you can find the instruction to helpers sheet if you want to use it for your own mock trials.
|
OPCFW_CODE
|
import MachineNode from '../../datatypes/graph/graphNode';
import { getPlaceableMachineClasses } from '../../../../graphql/queries';
import { GraphEdge } from '../../datatypes/graph/graphEdge';
const hydrate = (
deserializedData: any,
translate: any = (a: string) => a,
transform: any,
callback: any
) => {
const nodes = deserializedData.nodes;
const edges = deserializedData.edges;
console.time('initial load of classes');
getPlaceableMachineClasses().then((classes: any) => {
console.timeEnd('initial load of classes');
const mapping: any = {};
Object.values(classes).forEach((value: any) => {
mapping[value.name] = value;
});
const nodeMapping: any = {};
const hydratedNodes = nodes.map((node: any) => {
const selectedMachine = {
recipe: node.recipe,
class: mapping[node.machineClass],
tier: node.tier
};
const newNode = new MachineNode(
selectedMachine,
node.overclock,
node.fx,
node.fy,
false,
translate,
transform,
node.id
);
nodeMapping[node.id] = newNode;
return newNode;
});
const hydratedEdges = edges.map((edge: any) => {
return new GraphEdge(
nodeMapping[edge.sourceNodeId],
nodeMapping[edge.targetNodeId],
edge.tier,
true,
edge
);
});
callback({
nodes: hydratedNodes,
edges: hydratedEdges
});
});
};
export default hydrate;
|
STACK_EDU
|
Pithos is a light-weight Pandora radio client for Linux. It consumes lesser system resources than when playing Pandora radio via your web browser. Personally, I also like the comfort of keeping audio play separate from web browser, just to avoid unintended closing of Pandora when I just wanted to close internet browsing.
With ever dropping prices of USB flash drives, it makes sense to use an USB drive to create a installer drive from an ISO image or installer DVD. The advantages are the small form factor and ability to rewrite an updated installer any number of times.
Chrome and Chromium Browsers work great with hardware acceleration enabled. If the browsers sees incompatible graphics card (GPU) in your computer, hardware acceleration gets disabled by default settings, meaning all the load goes on CPU, when running a full HD video content in the browser.
One of the changes in Ubuntu 17.10 is the change in location of window buttons including Minimize, Maximize, and Close buttons. In Artful Aardvark, these control features are located on the top right corner of windows, just like in most of other Linux distros and even Microsoft Windows desktop.
Another day, another command line tutorial. Today, let's talk about an important networking command in Linux, ip. This command is very useful to fish out network parameters of the Linux computer. It works on all Linux distributions including Ubuntu, Arch Linux, Debian, Fedora, and etc..
GNOME extensions are tiny apps that can change the look & feel of desktop, adds handy features to the desktop, and in the process can make life easier by increasing productivity. By default settings of Ubuntu 17.10, GNOME extensions are not enabled. Let's see how to get it installed and configured.
Stacer is a completely free and one-stop system optimizer you have been waiting for. Its powerful features are easy to use in a nice looking user interface. The application is readily packaged as .deb and .rpm binaries, which can be used right away for installing it in Ubuntu.
Today, we will learn how to test internet speed from the command line via Terminal in various popular Linux distributions including Ubuntu, Fedora, and Arch Linux.
When it comes to determining performance of a computer or a operating system, most of you should have heard about 64-bit and 32-bit systems. Both these refer to the way a computer's processor manages the data. So, how to check if your Linux system is 32-Bit or 64-bit?
Ubuntu comes built-in with some powerful recovery tools, which often resolves the issues in typical cases. To boot into Ubuntu Recovery tools, you must first boot into GRUB. Let's check out what things you can fix using this powerful tools!
|
OPCFW_CODE
|
Because it's so easy and fun, I want to add another pattern:
class NthWeekdayPatternTests(unittest.TestCase): def setUp(self): self.pattern = NthWeekdayPattern(1, WEDNESDAY) def testMatches(self): firstWedOfSep2004 = datetime.date(2004, 9, 1) self.failUnless(self.pattern.matches(firstWedOfSep2004)) def testNotMatches(self): secondWedOfSep2004 = datetime.date(2004, 9, 8) self.failIf(self.pattern.matches(secondWedOfSep2004))
I don't have an example of this in my use cases as listed at the beginning of this article, but it's a feature that both calendar and pal support, so I expected to add it at some point.
Making these tests pass shouldn't be too hard:
class NthWeekdayPattern: def __init__(self, n, weekday): self.n = n self.weekday = weekday def matches(self, date): if self.weekday != date.weekday(): return False n = 1 while True: previousDate = date - datetime.timedelta(7 * n) if previousDate.month == date.month: n += 1 else: break return self.n == n
OK, it was harder than I thought. I'm not a huge fan of the way that algorithm looks, but it's OK for now. I really should at least extract it into its own method so I can give it an intention-revealing name:
def matches(self, date): if self.weekday != date.weekday(): return False return self.n == self.getWeekdayNumber(date) def getWeekdayNumber(self, date): n = 1 while True: previousDate = date - datetime.timedelta(7 * n) if previousDate.month == date.month: n += 1 else: break return n
The last example I do have at the beginning of this article is the "last day
of the month" case. That should match days "in reverse." I could modify the
DayPattern class, but instead I want to add a new pattern
class LastDayInMonthPatternTests(unittest.TestCase): def testMatches(self): lastDayInSep2004 = datetime.date(2004, 9, 30) pattern = LastDayInMonthPattern() self.failUnless(pattern.matches(lastDayInSep2004))
While typing in that test, I decided that I couldn't think of a reason to
support the second-to-last day in a month, or the third-to-last day, and so on.
Can you? I made it easy on myself and decided to implement a class called
LastDayInMonthPattern. People usually argue that writing tests up
front takes too much work, but writing a test first this time actually saved me
from writing code I would never use!
The implementation of this new pattern is:
class LastDayInMonthPattern: def matches(self, date): tomorrow = date + datetime.timedelta(1) return tomorrow.month != date.month
Although I just realized I'm cheating again. Here's how the fixture should
have looked (after extracting out the
setUp method) before fully
class LastDayInMonthPatternTests(unittest.TestCase): def setUp(self): self.pattern = LastDayInMonthPattern() def testMatches(self): lastDayInSep2004 = datetime.date(2004, 9, 30) self.failUnless(self.pattern.matches(lastDayInSep2004)) def testNotMatches(self): secondToLastDayInSep2004 = datetime.date(2004, 9, 29) self.failIf(self.pattern.matches(secondToLastDayInSep2004))
I feel pretty good about myself right now. That usually means it's time to refactor. Now I really want to do some renaming.
For the last pattern I implemented, I was very explicit about what it did:
LastDayInMonthPattern only matches the last day in a month and
there's no further clarification needed. What about
LastWeekdayPattern? I really
want to add
InMonth to the end of both of those class names. Yes,
I'm that picky.
I also took this time to rename a few of the test cases and reorder some of the class definitions. I won't bore you with the details, but you can see the final results for yourself if you download the code at the end of the article.
This type of tidying up may seem trivial but it's extremely important. If your code doesn't look clean, you (and others who find themselves working on your code) won't have any incentive to keep it clean. The Pragmatic Programmers call this the Broken Window Theory. If you live with broken windows, don't be surprised when your neighbors start using your lawn as a junk yard.
I have about 60 non-blank lines of code so far, spread across eight classes. That's not too much, but the code is very simple and yet highly flexible. I seriously doubt I would have been able to conceive of this design without writing my tests first.
What's even cooler is that I have about 90 non-blank lines of test code. Yes, I have more test code than I have "real" code. Is that wrong? Absolutely not. That's wonderful! I feel extremely confident about the quality of the code that I have so far. Is it perfect? I doubt it. When I discover a bug, though, I can add a new test to demonstrate it and fix it so that it never happens again. If I need to perform an optimization, I'll have a suite of tests I can use to verify that I didn't screw anything up while applying the optimization.
What's also interesting to note is that the design that emerged from this work. I spent zero time in front of a modeling tool trying to create a design that would both meet my needs today and still be elegant enough to (hopefully) meet all of tomorrow's needs, as well. I didn't intend for this to happen. It just magically happened that way. This isn't rare--this almost always happens when I do test-driven development.
How is this design more flexible than I originally intended? Suppose that I want to create a pattern that matches every Friday the 13th. That wasn't one of my original use cases, and I gave no thought to it while writing the tests. The classes I came up have no trouble representing that pattern:
>>> import DatePatterns >>> fri13 = DatePatterns.CompositePattern() >>> fri13.add(DatePatterns.WeekdayPattern(DatePatterns.FRIDAY)) >>> fri13.add(DatePatterns.DayPattern(13)) >>> import datetime >>> aug13 = datetime.date(2004, 8, 13) >>> aug13.strftime('%A') 'Friday' >>> fri13.matches(aug13) True >>> sep13 = datetime.date(2004, 9, 13) >>> sep13.strftime('%A') 'Monday' >>> fri13.matches(sep13) False
While I'm not done with the application yet, I do have a solid foundation to build on. Next, I need to add some parsing code so that I can read a file containing events in order to construct and use the patterns I implemented above. I'll visit that task next time.
Jason Diamond is a consultant specializing in C++, C#, and XML, and is located in sunny Southern California.
Return to the Python DevCenter.
|
OPCFW_CODE
|
Programming for Kids: What Are the Best Programming Languages for Kids and Beginners?
As technology plays more and more important role in our daily lives, mastering key computer programming skills is no longer just an important, forward-thinking decision for progressive individuals, but rather a necessity for adults and kids alike. In fact, programming for high school kids is increasingly becoming a hot topic.
Today, even elementary school kids can learn the basics of coding. Acquiring these skills at a tender age can greatly influence their overall development. In fact, studies have shown that coding strengthens a student’s other academic areas like reading, math, and spelling.
Kids who master programming and coding languages will transform into better problem solvers, and will be capable of developing sound analytical and deductive reasoning skills and greater quest for knowledge.
Software is the language of the moment, and teaching kids how to program at a younger age will certainly come in handy in their future life.
That said, it is important to note that not all programming languages are the same. With so many languages out there, it can be tricky choosing the best ones for kids or beginners.
The main types of programming languages
- Compiled languages – These are programming languages that have to be fully written out before running; meaning, they cannot be processed line-by-line like other programming languages. Ideally, compiled languages are much harder to learn since they cannot be edited or tested. Examples of compiled programming languages include C++ and BASIC.
Programming languages can also be broken down into two categories:
- Procedural programming – Without exaggeration, this is the core of all programming languages. They are useful for understanding how computers think with structures like “if this, then that.” An example of procedural programming language is Python.
- Object-oriented programming – These are the programming languages used by professional programmers. They come with several different objects (which are like mini-programs) interacting simultaneously with each other. Object-oriented programming languages are complex and can be confusing to follow. It is recommended that you introduce your kid these languages after mastering procedural programming languages. An example of an object-oriented programming language is Java.
What Coursera platform provides
Courses – the site has over 1600 courses in 700 topics. 40 percent of these courses are focused on business, management, computer science, and data science.
Foreign languages – Coursera provides lectures in 8 different courses including English, Spanish, French, Russian, Chinese, etc.
Course types – The platform offers a range of specializations like business lectures, technical courses, and even humanity and biological classes
Instructors – Coursera tutors and course creates are experienced professionals from around the world. They are either university professors or professionals who work at reputable global institutions. The platform has over 4,000 tutors and course instructors.
So, what makes the best programming language for kids?
If you have spent time programming, either as a hobby or professionally, you have probably tried out a few programming languages. But the first programming language you learn will be special; and it will stick with you forever. So, how do you choose the best programming language to learn?
Start by choosing your goals
Basically, computer programming is the art of directing a computer to perform tasks. There are different kinds of programs that you can build. Before choosing a programming language to learn, it is important that you identify a career path that you might be interested in taking as a computer programmer. Here are some considerations:
- Website development ( full-stack, front-end, or back-end)
- App developer (Android or iOS)
- PC game developer
- Financial services
- Software developer
- Data management
Top 9 programming languages for kids and beginners
Java is popularly known as the official language for developing Android application. This object-oriented language is quite easy to learn and developers using this programming language have tens of open source libraries to access and choose from.
For kids and beginners, the greatest motivation for learning Java programming language is learning how to build Minecraft on their own. Released in 2011, Minecraft kids from all over the world have fallen in love with this app. Kids can capitalize on their love for Minecraft to learn how to use logic in Java and solve different kinds of problems using this programming language.
Stability, scalability, high adaptivity, special software, graphical interfaces, perfect for developing applications and game engines.
Best online Java courses for kids:
Swift is definitely one of the best programming languages for kids and beginners. This is because this programming language comes with advanced features while requiring minimal coding. Additionally, it comes with a guideline that makes it easy for kids and beginners to convert Swift into game-like behavior. The language also allows development with simple drag-and-drop codes.
Swift features: Drag –and-drop code, free to download, best for developing applications on Apple platform.
Best online Swift courses:
Alice is a free 3D tool that is designed to teach the concepts of object-oriented programming. Kids can learn how to program with Alice by creating games or animations in 1 and 3-D models as well as camera motions using the building blocks approach. Additionally, Alice’s easy play button coupled with the drag-and-drop interface makes it extremely easy for kids and beginners to learn this programming language. Overall, Alice is an excellent programming language for kids who want to learn coding in a block-based virtual environment.
Alice’s features: drag-and-drop coding, object-oriented programming, block-based visual environment.
Online Alice courses:
4. Scratch 3.0
Scratch 3.0 is a great programming language that introduces kids to coding foundations. This language comes with a visual coding environment that makes it possible for users to develop games, applications, and characters with a drag-and-drop code blocks. Scratch is supplemented by beginner tutorials. It also comes with a building-block visual interface, and it can be used even offline. All these make Scratch one of the best programming languages for kids.
Scratch features: Free to download, block-style storytelling, comes with beginner tutorials, building-block visual interface, kid-friendly, can be used offline.
Top online scratch courses:
Blocky is a direct competitor of Scratch. Just like Scratch, it utilizes the same interlocking building blocks to develop applications. Its visual block function makes it easier for kids to master coding. Blocky is the backbone of the Android application Inventor, providing kids with a robust environment for kids to learn how to code.
Blocky features: comes with interlocking building blocks, the code is visible on the coder’s screen, capable of outputting the code in different programming languages, can switch programming languages, the building block for Android App Inventor, perfect for teaching coding to kids of all age groups.
C++ considered the foundation of most programming languages. It has the capability for developing enterprise applications. C++ utilizes a compiler-based approach that is quite simple yet very effective for developing applications. Thanks C++ versatility, you can use it for developing applications on multiple platforms.
C++ features: Great for building applications that can run locally on machines, cross-platform game development, ideal for developing Windows desktop applications.
Best online C++ courses:
There is no doubt that Python is one of the easiest coding languages to learn. This is because Python only requires a few code lines to become operational, and this makes it relatively easier for beginners like kids to learn and use for creating applications.
Python is a versatile programming language that is commonly used in highly-advanced fields such as Cyber security and Artificial Intelligence. It can also be used for creating numeric and scientific computing programs, video games, and web frameworks.
Python features: A versatile programming language with simple syntax, plenty of beginner books and tutorials, and Pygame toolkit.
Online Python courses:
Another object-oriented programming language with clear syntax, Ruby follows the Principle of Least Astonishment (POLA) philosophy. Ruby is an ideal programming language for kids and beginners because it is designed to make coding as simple and uncomplicated as possible. Better still, Ruby is quite natural, consistent, and easy to master and remember.
Ruby features: A case-sensitive, object-oriented, singleton method with expressive features, naming conventions, dynamic typing, statement delimiters, portable, duck typing, and exception handling.
Online Ruby courses:
Best programming languages for kids: Final thoughts
There is no doubt that the computer programming can seem daunting. Many give up at the first steps because they do not understand the terminology. For kids, however, it is important that they start their programming journey on the right footing. Introducing them to programming languages that are either too complex or out of line with their interests can put them off coding for their rest of their lives.
Thankfully, choosing the best programming languages for kids is not too difficult. All you need to do is identify a language that will align with your child’s aspirations and then narrowing down to a handful of languages. Sign up with the courses attached to these programs and get your kid started on their coding journey today.
|
OPCFW_CODE
|
You will work on a variety of projects, using a mix of development tools, platforms, and skill sets. You can become involved in requirements analysis and application design as you gain experience within the Systems & Content Management Team.
Responsible for building and configuring SharePoint websites using web UI or SharePoint Designer and developing code for SharePoint.
Responsible for developing applications around the core concepts of business process management, collaboration, business intelligence or enterprise content management.
Assisting in project development, defining technical requirements and developing solutions; on-going support for applications; designing and customizing SharePoint solutions; overseeing architectural design and integration content management, portals, collaboration, business process or other solutions; troubleshooting and debugging SharePoint sites; and working as a consultant on SharePoint customization projects.
Excellent communication skills, technical writing skills, solid presentation skills, problem-solving skills, strong analytical skills, customer service skills and ability to work independently and as part of a team.
Experience working in both Agile and waterfall SDLC processes.
Must also be self-motivated, detail oriented and ability to multi-task.
Strong SharePoint knowledge including: SharePoint Services, Designer, Web Parts, Site Collections, InfoPath, Forms Services, Excel Services, .Net, Search, Workflow's, Navigation customization, Business Data Catalog, SQL Server and Web Services
We are looking for a candidate with strong analytical skills. The candidate must be team-focused with a strong customer service approach. Attention to detail, strong work ethic and communication skills will be highly leveraged in this position. The candidate must have 3+ yrs experience with SharePoint 2010.
- Strong critical thinking skills
- Exceptional communicator
- Emphasis quality control
- Ability to work well in a team environment
- Ability to quickly learn and understand business basics business
The position is approximately:
- 85% Portal / SharePoint configuration, development and coding
- 10% Testing and integration
- 5% Administrative duties
Code, Test, Document and Implement application programs according to client specifications. Provides ongoing maintenance and production support of existing application programs in a coordinated effort.
Portal / SharePoint Functions:
- Senior-level expertise with Microsoft SharePoint 2010 and MOSS technologies including web part development, object model, site definitions and features.
- Experience creating/assembling custom Web Parts, specifically for Enterprise Search functionality and web part UI elements customizations.
- Knowledge of workflows, SharePoint Server 2010, SharePoint Foundation , Sharepoint Services 3.0, and Designer
- Broad experience with .NET Framework, [url removed, login to view] and C#.
- Experience with Collaboration, Portals, Enterprise Search, Enterprise Content Management
- Experience with SharePoint Templates (Application, Master Pages, and Role-Based)
- Experience creating custom controls in .NET and deploying them in SharePoint 2007
- Must have a detailed and clear understanding of HTML and Cascading Style sheets (CSS).
- QA experience in testing site features.
- Experience with SQL Server 2008
Work Remote | H1B requires MCP*
17 freelancers are bidding on average $15/hour for this job
Hello, We have the required expertise that you are seeking in sharepoint using web UI as well as coding in .net Can you view our PMB for more details. Regards, Simron John
Thank You for posting such an interesting project. We can develop this web & android based application with full perfection. For further discussion please check private message. Thanks & Regards
|
OPCFW_CODE
|