Article: Prefab homes from Cover are designed by computer algorithms

06 Apr 2017

Specializing in backyard studios

If you’re in the market for a prefab dwelling, whether as a full-time home or a backyard unit, options abound. What L.A.-based startup Cover wants to add to the equation is a tech-driven efficiency that makes the whole design and building process a breeze for the customer.

As detailed in a new profile on the company over on Co.Design, Cover sees itself as more of a tech company than a prefab builder. Indeed, whereas a typical prefab buying process would begin with choosing one of a few model plans and maybe then consulting with architects to tweak the design for specific needs, Cover turns the whole design process over to computer algorithms. Co.Design explains:

Once customers begin the design process, Cover sends them a survey of about 50 to 100 questions to inform the design. It asks about lifestyle–how many people typically cook a meal and what appliances are must-haves?–and structural needs, like should they optimize one view and block another one?

The company also uses computer modeling to optimize window placement, cross-ventilation, and natural light, drawing on zoning, sun-path, and geospatial data. All of these parameters are then fed to a proprietary computer program that spits out hundreds of designs satisfying the supplied requirements.
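
The pipeline described here, survey answers plus site data feeding a generator that emits only designs meeting the constraints, can be caricatured as a simple generate-and-filter loop. Everything in this sketch (parameter names, ranges, requirement keys) is invented for illustration; Cover’s actual software is proprietary and far more sophisticated.

```python
import itertools

# Hypothetical requirements distilled from a customer survey.
REQUIREMENTS = {"bedrooms": 1, "min_windows": 4, "kitchenette": True}

def generate_candidates():
    """Enumerate simple candidate designs over a small parameter grid."""
    for bedrooms, windows, kitchenette in itertools.product(
        range(0, 3), range(2, 9), (True, False)
    ):
        yield {"bedrooms": bedrooms, "windows": windows, "kitchenette": kitchenette}

def satisfies(design, req):
    """Keep only designs that meet every stated requirement."""
    return (
        design["bedrooms"] == req["bedrooms"]
        and design["windows"] >= req["min_windows"]
        and design["kitchenette"] == req["kitchenette"]
    )

designs = [d for d in generate_candidates() if satisfies(d, REQUIREMENTS)]
```

A real system would score and rank the survivors (on light, views, cost) rather than return them all, but the shape is the same: constraints in, a set of valid designs out.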

Here are a couple of key things to know about Cover’s prefabs:

  • The company specializes in the accessory dwelling unit, a secondary structure on a property with an existing single-family house. These units can serve as guesthouses, in-law units, offices, yoga studios, and potentially a source of rental income.
  • While the computer will churn out a whole bunch of designs, Cover dwellings generally have a minimal modern look with an insulated steel structure, glass walls, and built-in storage.
  • When you order with Cover, the company takes care of the whole process, from coming up with a design, as described above (which takes three business days and $250), to acquiring necessary permits (two to five months, $20,000), to building and installation (12 weeks, final price contingent on the specific design). Some sample costs offered on the website are as follows: $70,000 for a guest room, $130,000 for a studio with a kitchenette, $160,000 for a one-bedroom unit, and $250,000 for a two-bedroom unit.

Via: Co.Design

Tags: #mscedc

April 06, 2017 at 11:40PM


Article: We Just Created an Artificial Synapse That Can Learn Autonomously

A team of researchers has developed artificial synapses that are capable of learning autonomously and can improve how fast artificial neural networks learn.

Mimicking the Brain

Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks. These contain algorithms that can be trained, among other things, to imitate how the brain recognizes speech and images. However, running an artificial neural network consumes a lot of time and energy.

Image Credit: Sören Boyn/CNRS/Thales physics joint research unit

Now, researchers from the National Center for Scientific Research (CNRS), Thales, and the Universities of Bordeaux, Paris-Sud, and Évry have developed an artificial synapse called a memristor directly on a chip. It paves the way for intelligent systems that require less time and energy to learn, and it can learn autonomously.

In the human brain, synapses work as connections between neurons. The more these synapses are stimulated, the more the connections are reinforced and the better learning becomes. The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarized) enclosed between two electrodes. Using voltage pulses, its resistance can be adjusted, much like a biological synapse: the synaptic connection is strong when resistance is low, and vice versa. The memristor’s capacity for learning is based on this adjustable resistance.
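
The behavior described here can be sketched as a toy model: a synaptic weight stored as an adjustable conductance (the inverse of resistance) that voltage pulses nudge up or down within physical limits. This is an illustrative sketch only; the class name, parameters, and step sizes are invented and do not reflect the actual CNRS/Thales device physics.

```python
class MemristorSynapse:
    """Toy synapse: the weight is an adjustable conductance (1/resistance)."""

    def __init__(self, conductance=0.5, g_min=0.01, g_max=1.0, step=0.05):
        self.g = conductance          # high conductance = low resistance = strong link
        self.g_min, self.g_max = g_min, g_max
        self.step = step

    def pulse(self, polarity):
        """A voltage pulse nudges the conductance up (+1) or down (-1).

        Repeated positive pulses strengthen the connection, mimicking how
        repeated stimulation reinforces a biological synapse.
        """
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))

    def transmit(self, signal):
        """Output scales with conductance, like current through the device."""
        return signal * self.g


syn = MemristorSynapse()
for _ in range(4):
    syn.pulse(+1)              # repeated stimulation reinforces the connection
print(round(syn.g, 2))         # -> 0.7
```

The key property the article highlights, learning stored in an analog, tunable physical quantity rather than in digital memory, is what the bounded `g` stands in for here.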

Better AI

AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do. For instance, intelligent systems can now compose music, play games and beat human players, or do your taxes. Some can even identify suicidal behavior, or differentiate between what is lawful and what isn’t.

This is all thanks to AI’s capacity to learn, the only limitation of which is the amount of time and effort it takes to consume the data that serve as its springboard. With the memristor, this learning process can be greatly improved. Work continues on the memristor, particularly on exploring ways to optimize its function. For starters, the researchers have successfully built a physical model to help predict how it functions. Their work is published in the journal Nature Communications.

Soon, we may have AI systems that can learn as well as our brains can, or even better.

Dom Galeon / April 5, 2017

Tags: #mscedc
April 06, 2017 at 03:24PM

Article: Unpaywall Is New Tool For Accessing Research Papers For Free

April 5, 2017 by Larry Ferlazzo

As anyone who has tried to pursue even a little bit of academic research can attest, publishers charge an arm and a leg to access studies if you are not part of an institution that subscribes to their journals. And the authors of those studies don’t even get any of that money!

Last year, Sci-Hub broke through that barrier in one attempt (which may or may not be legal) to create more access – see The Best Commentaries On Sci-Hub, The Tool Providing Access to 50 Million Academic Papers For Free.

Today, another option was unveiled.

Today we’re launching a new tool to help people read research literature, instead of getting stuck behind paywalls. It’s an extension for Chrome and Firefox that links you to free full-text as you browse research articles. Hit a paywall? No problem: click the green tab and read it free!

The extension is called Unpaywall, and it’s powered by an open index of more than ten million legally-uploaded, open access resources.

Apparently, many institutions now require their faculty to upload their published papers to their libraries, and those papers are a primary source for Unpaywall.

I just tried it and it seems to work fairly well…

Tags: #mscedc
April 06, 2017 at 03:20PM

Article: Impressive Adobe Algorithm Transfers One Photo’s Style Onto Another

Mar 29, 2017


Two pairs of researchers from Cornell University and Adobe have teamed up to develop a “Deep Photo Style Transfer” algorithm that can automatically apply the style (read: color and lighting) of one photo to another. The early results are incredibly impressive and promising.

The software is an expansion of the tech used to transfer painting styles like Monet or Van Gogh to a photograph, as the app Prisma does. But instead of a painting, this program uses other photographs for reference.

“This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style,” says the rather technical abstract of the Deep Photo Style Transfer paper.

Put more plainly: when you put in two photographs, the neural network-powered program analyzes the color and quality of light in the reference photo, and pastes that photo’s characteristics onto the second. This includes things like weather, season, and time of day—theoretically, a winter’s day can be turned into summer, or a cloudy day into a glorious sunrise.
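
The paper’s actual method is a deep neural network, but the core idea of pasting one photo’s color characteristics onto another can be illustrated with a much simpler classic technique: matching each channel’s mean and standard deviation to the reference (in the spirit of Reinhard-style color transfer). The sketch below is a simplified stand-in, not the Adobe algorithm.

```python
import numpy as np

def color_transfer(content, reference):
    """Shift the content image's per-channel mean/std to match the reference.

    A crude stand-in for photographic 'style': only global color statistics
    move; the structure of the content image is untouched.
    """
    content = content.astype(np.float64)
    reference = reference.astype(np.float64)
    out = np.empty_like(content)
    for c in range(3):  # per RGB channel
        c_mean, c_std = content[..., c].mean(), content[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        scale = r_std / c_std if c_std > 0 else 1.0
        out[..., c] = (content[..., c] - c_mean) * scale + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# A flat gray "content" image takes on the warm cast of an orange "reference".
content = np.full((4, 4, 3), 128, dtype=np.uint8)
reference = np.zeros((4, 4, 3), dtype=np.uint8)
reference[..., 0] = 220   # strong red channel
reference[..., 1] = 120
result = color_transfer(content, reference)
```

What the Cornell/Adobe work adds on top of statistics like these is exactly the hard part: doing the transfer locally and semantically (sky to sky, building to building) without distorting edges.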

The team’s early examples show the program in action: an original photo, combined with a reference photo, yields the final restyled photo.

It’s important to note that the software does not alter the structure of the photo in any way, so there’s no risk of distorting the lines, edges or perspective. The entire focus is on mimicking the color and light in order to copy the “look” or “style” of a reference photograph onto a new shot.

Since this is a lot easier said than done, the program has to intelligently compensate for differences between the donor and receiving image. If there is less sky visible in the receiving image, it will detect this difference and not cause the sky to spill over into the rest of the original shot, for example.

The software even attempts to “achieve very local drastic effects,” such as turning on the lights on individual skyscraper windows, all without altering the original photo by moving windows around or distorting edges.

In the future, a perfected version of this technology could make its way into Photoshop as a tool, or run as a separate program or plug-in. Not that you should bank on this tech fixing the photos from your upcoming trip; like any other new technology, there is work to be done.

“The study shows that our algorithm produces the most faithful style transfer results more than 80% of the time,” the paper cautions. So maybe you can’t change Ansel Adams’s Moonrise, Hernandez into a Sunrise, Hernandez, but you get the picture (no pun intended), and it is very promising.

Tags: #mscedc

March 30, 2017 at 12:10AM

Article: Google’s new algorithm shrinks JPEG files by 35 percent


David Lumb/17 Mar 2017

For obvious reasons, Google has a vested interest in reducing the time it takes to load websites and services. One method is reducing the file size of images on the internet, which it previously pulled off with the WebP format back in 2014, shrinking photos by about 10 percent. Its latest development in this vein is Guetzli, an open-source algorithm that encodes JPEGs that are 35 percent smaller than currently produced images.

As Google points out in its blog post, this reduction method is similar to its Zopfli algorithm, which shrinks PNG and gzip files without needing to create a new format. RNN-based image compression like WebP, on the other hand, requires both client and ecosystem changes to see gains at internet scale.

If you want to get technical, Guetzli (Swiss German for “cookie”) targets the quantization stage of image compression, wherein it trades visual quality for a smaller file size. Its particular psychovisual model (yes, that’s a thing) “approximates color perception and visual masking in a more thorough and detailed way than what is achievable” in current methods. The only tradeoff: Guetzli takes a little longer to run than compression options like libjpeg. Despite the increased time, Google’s post assures that human raters preferred the images churned out by Guetzli. Per the example below, the uncompressed image is on the left, libjpeg-shrunk in the center and Guetzli-treated on the right.
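
The quantization stage mentioned here is, in textbook terms, dividing each block’s DCT coefficients by a quantization table and rounding; coarser steps produce more zeros and thus smaller files at the cost of fidelity. Guetzli’s contribution is a far more sophisticated psychovisual choice of those steps. Below is a minimal sketch of the generic operation only (function names invented for illustration), not Guetzli’s model.

```python
def quantize(dct_block, quant_table, quality_scale=1.0):
    """Divide DCT coefficients by a (scaled) quantization table and round.

    Larger quality_scale -> coarser steps -> more zero coefficients ->
    smaller file, at the cost of visual fidelity.
    """
    return [
        [round(coef / (q * quality_scale)) for coef, q in zip(row, qrow)]
        for row, qrow in zip(dct_block, quant_table)
    ]

def dequantize(quantized, quant_table, quality_scale=1.0):
    """Approximate reconstruction: multiply back by the same step sizes."""
    return [
        [v * q * quality_scale for v, q in zip(row, qrow)]
        for row, qrow in zip(quantized, quant_table)
    ]

# A tiny 2x2 example: a coarser table zeroes out the small coefficients.
block = [[240.0, 12.0], [9.0, 3.0]]
table = [[16, 11], [12, 14]]
fine = quantize(block, table, quality_scale=1.0)     # -> [[15, 1], [1, 0]]
coarse = quantize(block, table, quality_scale=3.0)   # -> [[5, 0], [0, 0]]
```

The rounding is where information is irreversibly discarded, which is why choosing the step sizes well (Guetzli’s job) matters so much.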

Tags: #mscedc
March 17, 2017 at 04:10PM

Article: Computer says no: New Jersey is using an algorithm to make bail recommendations

February 24, 2017 4:21 PM

Tags: #mscedc
March 07, 2017 at 08:53PM

Article: A Pedagogical Shift Needed for Digital Success

In a previous post I discussed in detail strategies to help ensure the effective use of technology to improve learning outcomes. You don’t have to be a fan of technology, but you do need to understand that it’s a catalyst for some exciting pedagogical changes. The purposeful use of technology can innovate assessment, transform time frames around learning, increase collaboration, enable unprecedented access to information and research, and provide a level of student ownership like never before. These are all outcomes that any educator would (or should) openly embrace.

I get the fact that technology can increase engagement, but if that engagement does not lead to evidence of learning, then what’s the point? Like it or not, all educators are being held accountable in some form or another for improvement in learning outcomes that results in an increase in achievement. This is why evidence of a return on instruction (ROI) when integrating technology is critical. Just using it to access information is not a sound use either. As teachers and administrators we must be more intentional when it comes to digital learning. If the norm is surface-level integration that asks students to demonstrate knowledge and comprehension, the most beneficial aspects of digital learning are missed. A recent article by Beth Holland for Edutopia reinforced many of my thoughts as of late on this topic. Below are some words of caution from her:


Student agency is one of the most powerful improvements that technology can provide.  This is the ultimate goal in my opinion, but to begin to set the stage for consistent, effective use a uniform pedagogical shift has to be our focus when it comes to digital learning.  The Rigor Relevance Framework provides a solid lens to look at the learning tasks that students are engaged in and redesign them in ways that move away from telling us what they know and instead showing whether or not they actually understand.

This simple, yet powerful shift can be applied to all digital activities. Now, I fully understand there is a time and place for basic knowledge acquisition and recall, especially at the elementary level. However, the goal should be an evolution in pedagogy, especially assessment, where students can demonstrate conceptual mastery in a variety of ways. Instead of using technology to ask students what the capital of a state or country is, ask them to create a brochure using a tool of their choice and explain why the capital is located where it is. When designing digital learning tasks, think about how students can demonstrate understanding aligned to standards by:

  • Arguing  
  • Creating
  • Designing 
  • Inventing
  • Concluding
  • Predicting
  • Exploring
  • Planning
  • Rating
  • Justifying
  • Defending
  • Comparing

It is important to understand that the verbs above should apply to a range of innovative learning activities, not just those involving digital tools. By moving away from the use of technology to support low-level learning tasks, we can really begin to unleash its potential while providing students with greater relevance through authentic work. This shift will take some time, but the ultimate learning payoff is well worth it. Below are some examples of how my teachers made this shift when I was the principal at New Milford High School:

Lend a critical lens to your digital learning activities to begin to develop more activities where students demonstrate what they understand as opposed to what they just know. As pedagogy evolves in step with technology, a key to success will be to ensure that meaningful, high-level, and valuable learning results.


Tags: #mscedc
February 27, 2017 at 12:51PM

Article: Are Teachers Becoming Obsolete?

Paul Barnwell/15 Feb 2017

A veteran educator reflects on the personalized-learning trend that’s left him wondering if a computer is more capable of doing his job than he is.

Kacper Pempel / Reuters

Leaving my school building the other day, I had an unexpected realization: Perhaps a computer was a more effective teacher than I currently was. The thought unnerved me, and still does as I’m writing this. I’m a nearly 13-year veteran educator dedicated to reflecting upon and refining my teaching craft. But I’m now considering the real possibility that, for at least part of a class period or school day, a computer could—and maybe should—replace me.

For the past several weeks, I’ve begun class with a simple routine: Students enter the room, grab a new Chromebook, log on to the Reading Plus program, and spend roughly 20 minutes working at their own pace. I stroll around the room and help with technology troubleshooting or conference with students, quietly chatting about academic progress or missing work. I’ve also found myself pausing, marveling at what this program promises to accomplish: meeting students where they are academically and, at least in theory, helping a wildly diverse group of students improve their literacy skills.

Developments in education technology promise to assist teachers and school systems in supporting struggling students by providing individualized instruction. But at what cost? As a teacher, it’s difficult to adapt to and embrace a machine that—at least for part of the time—takes over for me. The processes of teaching and learning are complex and innately human; I value the time I take to develop relationships with my students. But it’s hard not to wonder if that time could better be spent with adaptive learning technology.

My third-period sophomore English class at Fern Creek High School in Louisville, Kentucky, contains a wonderful mix of students hailing from the neighborhood and around the globe—my students represent Jordan, Afghanistan, Democratic Republic of Congo, Tanzania, Russia, and Mexico. I’ve thoroughly enjoyed getting to know how students arrived in our classroom in addition to hearing about their hopes, fears, and dreams. With this diversity also comes a huge range of student ability. Computerized reading assessments and other benchmarked tests reveal that roughly 90 percent of my class is behind grade level in reading.

How could I possibly create 27 customized lessons?

About half of those students are at least four grade levels behind. My own anecdotal observations support this challenging reality as well. And across the country, only 34 percent of eighth-graders scored proficient or above in reading in 2015 according to the Nation’s Report Card. School districts’ attempts to improve literacy achievement are pervasive, and our school administration’s mandate to employ Reading Plus in most of our freshman and sophomore English classes reflects this.

I’d love to be able to provide individual instruction to my third-period class. One problem—and it’s a big one—is that I don’t know how to teach reading to students who are either new to the language or far behind grade level. And I know I’m hardly alone as a high-school English teacher in this tenuous position. I’ve earned an undergraduate degree in American literature, a master’s in teaching, and a master’s in English literature. Yet these credentials haven’t equipped me with the necessary background or skills to significantly improve my students’ reading ability. I’m not trained as a reading specialist. Even if I were, how could I possibly create 27 customized lessons? Maybe Reading Plus can do some of what I can’t.

During the independent, silent work periods at the start of my class, the program adapts to students’ reading speed and comprehension ability, creating a customized scrolling illumination—imagine a rectangular flashlight beam only highlighting the text your eyes scan. Many students seem to embrace this moving target; at the least, they are more physically engaged with reading than ever before, and the program seems to be motivating a clear majority of students.

Reading Plus is emblematic of a growing trend toward personalized learning in public education; it’s the idea that schools can better serve students by providing more customized instruction. The term personalized learning refers to a vast array of approaches to education; examples include a high school in Deer Isle, Maine, and its radical curriculum overhaul to meet needs of individual learners in more creative ways, as well as San Diego’s High Tech High, where student-designed, long-term passion projects are paramount to the learning process.

Personalized learning, however, often manifests itself in school districts in less dynamic ways than in Maine and at High Tech High. The initiatives often become software or technology-based, with digital “instruction” adjusting based on competency levels or skills of its student users. It’s not about student passion or authentic projects—it’s all about remediating and measuring specific academic skills.

And as I’ve experienced first-hand, the role of teachers shifts dramatically with the adoption of these adaptive programs. Instead of a teacher striving to know a student on multiple levels—from understanding the nuances of his or her academic skills, to building positive relationships and crafting learning experiences based on more than numerical reading scores—educators are on the sidelines while a machine takes over. Personalized learning often becomes inherently impersonal; it’s a sterile approach to messy, complex classroom processes. And there’s also big money at stake for education-technology companies and curriculum publishers who are taking advantage of pressure to increase academic achievement.

While we are still a community of learners, it feels less dynamic, even if students are making incremental reading gains.

According to this 2014 Education Week report, the federal Department of Education’s Race to the Top competition awarded 16 school districts $350 million to support efforts to personalize learning, often including adaptive software and digital tools as part of their plans.

For example, Miami-Dade Public Schools’ plan included buying access to Carnegie Learning’s Mathia, a program that “tutors” middle-school students in math. Carson City, Nevada’s school system included a plan to incorporate MasteryConnect, which, according to the report, is updated in real time as students take assessments, looking at mastery of learning targets (or specific academic skills). I wonder if educators in these locales are feeling as conflicted as I am.

Critics of the software-driven personalized-learning trend, including the author Alfie Kohn and FairTest, an organization dedicated to curtailing misuses and flaws of standardized testing, contend that there are significant problems with this approach. Kohn laments school districts’ focus on improving test scores as a catalyst in software adoption. One of the issues addressed in this FairTest post is that “frequent online student assessments require teachers to review copious amounts of data instead of teaching, observing and relating to students.” I agree with both of these criticisms, particularly the idea of losing more opportunities for human interaction in favor of customized screen time.

In 2014, I wrote a piece for The Atlantic titled “My Students Don’t Know How to Have a Conversation,” arguing that students’ reliance on screen time is detracting from their ability to communicate verbally. And now school systems are adopting programs designed to keep students glued to yet another screen for reading practice, which, by design, is a closed system. With Reading Plus, students do not have the shared experience and discussions after reading the same text, like when we analyze Kurt Vonnegut’s short story “Harrison Bergeron” or The Color Purple together. It’s all individualized, silent work. While we are still a community of learners, it feels less dynamic, even if students are making incremental reading gains according to the program.

For struggling readers and writers, it’s understandable that teachers, schools, and systems are striving to do whatever it takes to improve literacy levels. But whether struggling students are better off graduating from high school having been remediated by personalized-learning software versus more dynamic learning experiences, even if their reading skills marginally improve, remains an open question. I’m hopeful that this blended approach to teaching and learning—the combination of using technology-assisted activity and more traditional face-to-face methods—will be useful for my students. And I wasn’t always open to this possibility.

When I first read Michael Godsey’s essay for The Atlantic, “The Deconstruction of the K-12 Teacher,” a few years ago, I scoffed at the idea of teachers being replaced by classroom technology facilitators. Godsey writes, “The ‘virtual class’ will be introduced, guided, and curated by one of the country’s best teachers (a.k.a. a ‘super-teacher’), and it will include professionally produced footage of current events, relevant excerpts from powerful TedTalks, interactive games students can play against other students nationwide, and a formal assessment that the computer will immediately score and record.”

In Godsey’s vision, those who currently serve as classroom teachers—like myself—would be replaced or forced to make radical changes in becoming a facilitator instead. Yet in the world of software-driven personalized learning, Godsey’s “super-teacher” isn’t even needed—only folks who can keep students behaved and on-task. I’ve reread the piece and agree with some of its conclusions: There’s no doubt the role of teachers is changing rapidly in many school districts towards more facilitation. Like Godsey, I’d struggle to tell a young teacher in training what to expect in the coming years, but blended learning will only increase in popularity. For now, I’m okay with my changing role, and it’s too early to tell if Reading Plus is worth the time and students’ effort.

As I write my lesson plans for next week, I chunk out the daily time needed for students to engage with their personalized learning. I tell myself I’m still needed for the 45 minutes they aren’t tracking the illuminated scrolling target. I can still do my best to impart a love of writing, attempt to spark passions, encourage curiosity, foster discussions, smile, laugh, and interact with the students in ways a screen can’t, even if Reading Plus “knows” more technical information about their reading levels than I ever could.

Tags: #mscedc

February 15, 2017 at 10:21PM


Article: How To Get $20,000 Off The Price Of A Master’s Degree

Kirk Carapezza/15 Feb 2017

There’s an experiment underway at a few top universities around the world to make some master’s degrees out there more affordable.

The Massachusetts Institute of Technology, for example, says the class of 2018 can get a master’s degree in supply chain management for more than $20,000 off the university’s normal price, which runs upwards of $67,000 for the current academic year.

But it’s not as simple as sending in a coupon with your tuition bill.

It’s called a “MicroMasters.” MIT, Columbia University, the University of Michigan and the Rochester Institute of Technology are among a dozen or so universities globally that are giving this online program a shot.

It’s not a full degree, but a sort of certificate, and can be a step toward a degree.

There are things in it for students, and for the school.

What’s in it for students: cost

Let’s take Danaka Porter as an example. She’s a 31-year-old business consultant from Vancouver, British Columbia, and says a master’s degree was exactly what she needed to boost her career.

“I found that people were a little bit more respected, I guess, once they had their master’s because it was like they had taken that next step to go a little bit further,” she says.

But she couldn’t afford to stop working and become a full-time student again. She owns a house, she says, and “I have bills, and all of that stuff that doesn’t stop because I wanted to go to school.”

When a friend told her that MIT was piloting its first partially online master’s degree in supply chain management, she signed up.

Tuition for a year in the master’s in supply chain management program is $67,938. Her MicroMasters certification, though, costs just $1,350.

It’s called a MicroMasters because it isn’t a full degree, just a step toward one, though Porter says the coursework is just as rigorous as if she were on MIT’s campus in Cambridge.

“It requires a lot of effort and if you don’t have a background in math, engineering or supply chain it’s not a breeze. Like, we do have people that fail,” she says.

Even if she passes the certification, Porter will still need to complete a semester “in residence” at full cost if she wants to finish her graduate degree. It’s part of what MIT calls the “blended” program — online and on-campus.

Getting accepted is no easy task. MIT says it expects to admit 40 students a year into the blended program.

Some top schools from around the world are on board with MIT.

There’s user experience research and design from the University of Michigan; entrepreneurship from the Indian Institute of Management Bangalore; and artificial intelligence from Columbia University, among others.

Even if students don’t go for a full master’s, the online course work can make them more appealing to employers.

Industry leaders who say they can’t find enough qualified candidates are looking for very specific skills like the ones being taught. GE, Walmart, IBM and Volvo have recognized MicroMasters and are encouraging their employees and job applicants to take these courses.

Some students who are enrolled in MIT’s on-campus program wish these online courses had been available to them before spending big on their degrees.

“If this was an option, I think I would have considered it,” says Veronica Stolear, a graduate student at MIT from Caracas, Venezuela. She quit her job in the oil industry to earn her master’s in supply chain management. Ultimately, though, she thinks her on-campus experience will pay off.

“The in-campus program is more expensive, but you’re getting also the experience of living in Boston, interacting with people from MIT that might not be in supply chain but might be in like the business school and like other types of departments,” she says.

What’s in it for schools: getting the best applicants

You might be wondering what MIT gets out of this arrangement.

Admissions officers here say they’ll weigh applicants’ performance in these online courses.

Anant Agarwal, an MIT professor and CEO of the online-learning platform edX that makes these online courses possible, sees it all as a way to filter the applicant pool.

“When you get applications from people all over the world, it’s often a crap-shoot,” he says. “You don’t know the veracity of the recommendation letters or the grades. And so you’re taking a bet very often.”

And Agarwal says that should give MIT and other institutions a better sense of how students will perform — if they’re lucky enough to get in.

Tags: #mscedc
February 15, 2017 at 10:04PM

Article: A School Librarian Caught In The Middle of Student Privacy Extremes


February 8, 2017 | By Gennie Gebhart

As a school librarian at a small K-12 district in Illinois, Angela K. is at the center of a battle of extremes in educational technology and student privacy.

On one side, her district is careful and privacy-conscious when it comes to technology, with key administrators who take extreme caution with ID numbers, logins, and any other potentially identifying information required to use online services. On the other side, the district has enough technology “cheerleaders” driving adoption forward that now students as young as second grade are using Google’s G Suite for Education.

In search of a middle ground that serves students, Angela is asking hard, fundamental questions. “We can use technology to do this, but should we? Is it giving us the same results as something non-technological?” Angela asked. “We need to see the big picture. How do we take advantage of these tools while keeping information private and being aware of what we might be giving away?”

School librarians are uniquely positioned to navigate this middle ground and advocate for privacy, both within the school library itself and in larger school- or district-wide conversations about technology. Often, school librarians are the only staff members trained as educators, privacy specialists, and technologists, bringing not only the skills but a professional mandate to lead their communities in digital privacy and intellectual freedom. On top of that, librarians have trusted relationships across the student privacy stakeholder chain, from working directly with students to training teachers to negotiating with technology vendors.

Following the money

Part of any school librarian’s job is making purchasing decisions with digital vendors for library catalogs, electronic databases, e-books, and more. That means that school librarians like Angela are trained to work with ed tech providers and think critically about their services.

“I am always asking, ‘Where is this company making their money?’” Angela said. “That’s often the key to what’s going on with the student information they collect.”

School librarians know the questions to ask a vendor. Angela listed some of the questions she tends to ask: What student data is the vendor collecting? How and when is it anonymized, if at all? What does the vendor do with student data? How long is it retained? Is authentication required to use a certain software or service, and, if so, how are students’ usernames and passwords generated?

In reality, though, librarians are not always involved in contract negotiations. “More and more tech tools are being adopted either top-down through admin, who don’t always think about privacy in a nuanced way, or directly through teachers, who approach it on a more pedagogical level,” Angela said. “We need people at the table who are trained to ask questions about student privacy. Right now, these questions often don’t get asked until a product is implemented—and at that point, it’s too late.”

Teaching privacy

Angela wants to see more direct education around privacy concepts and expectations, and not just for students. Teachers and other staff in her district would benefit from more thorough training, as well.

“As a librarian, I believe in the great things technology can offer,” she said, “but I think we need to do a better job educating students, teachers, and administrators on reasons for privacy.”

For students, Angela’s district provides the digital literacy education mandated by Illinois’s Internet Safety Act. However, compartmentalized curricula are not enough to transform the way students interact with technology; privacy lessons have to be reinforced across subjects throughout the school year.

“We used to be able to reinforce it every time library staff worked with students throughout the year,” Angela said, “but now staff is too thin.”

Teachers also need training to understand the risks of the technology they are offering to students.

“For younger teachers, it’s hard to be simultaneously skeptical and enthusiastic about new educational technologies,” Angela said. “They are really alert to public records considerations and FERPA laws, but they also come out of education programs so heavily trained in using data to improve educational experiences.”

In the absence of more thorough professional training, Angela sees teachers and administrators overwhelmed with the task of considering privacy in their teaching. “Sometimes educators default to not using any technology at all because they don’t have the time or resources to teach their kids about appropriate use. Or, teachers will use it all and not think about privacy,” she said. “When people don’t know about their options, there can be this desperate feeling that there’s nothing we can do to protect our privacy.”

Angela fears that, without better privacy education and awareness, students’ intellectual freedom will suffer. “If students don’t expect privacy, if they accept that a company or a teacher or ‘big brother’ is always watching, then they won’t be creative anymore.”

A need for caution moving forward

Coming from librarianship’s tradition of facilitating the spread of information while also safeguarding users’ privacy and intellectual freedom, Angela is committed to adopting and applying ed tech while also preserving student privacy.

“I am cautious in a realistic way. After all, I’m a tools user. I know I need a library catalog, for example. I know I need electronic databases. Technologies are a necessary utility, not something we can walk away from.”

As ed tech use increases, school librarians like Angela have an opportunity to show that there is no need to compromise privacy for newer or more high-tech educational resources.

“Too many people in education have no expectation of privacy, or think it’s worth it to hand over our students’ personal information for ed tech services that are free. But we don’t have to give up privacy to get the resources we need to do good education.”

Tags: #mscedc
February 08, 2017 at 10:39PM

Article: Boston Dynamics adds wheels to its already chilling robots

John Mannes/01 Feb 2017

Alphabet subsidiary Boston Dynamics doesn’t have much to prove when it comes to producing the robots of your nightmares. Previous iterations of the company’s prototypes have been kicked over by humans only to stand right back up, for example. But at an event this week, founder Marc Raibert managed to unveil something simultaneously more unsettling and technologically impressive.

Going by the name of Handle, the new bot features both legs and wheels. The creation, captured on video by DFJ’s Steve Jurvetson, is said to be more efficient than a purely legged robot. Even with a small footprint, large loads don’t seem to be a problem for the robot. Its ability to “handle” objects is where the inspiration for its name originated.

A combination of hardware and software enables the robot to balance itself and throw its weight around, even when rotating rapidly on wheels. It can even jump over objects. In the video above, at about 4:15, you can see Handle extend its arms for balance during an extended spin.

Tags: #mscedc
February 01, 2017 at 10:39PM

How Video Games Satisfy Basic Human Needs – Facts So Romantic – Nautilus

Posted By Simon Parkin on Jan 04, 2017

“Mass Effect: Andromeda” | Image from IGN / Bioware / YouTube

Grand Theft Auto, that most lavish and notorious of all modern videogames, offers countless ways for players to behave. Much of this conduct, if acted out in our reality, would be considered somewhere between impolite and morally reprehensible. Want to pull a driver from her car, take the wheel, and motor along a sidewalk? Go for it. Eager to steal a bicycle from a 10-year-old boy? Get pedaling. Want to stave off boredom by standing on a clifftop to take pot shots at the screaming gulls? You’re doing the local tourism board a favor. For a tabloid journalist in search of a hysterical headline, the game offers a trove of misdemeanors certain to outrage any non-player.

Except, of course, aside from its pre-set storyline, Grand Theft Auto doesn’t prescribe any of these things. It merely offers us a playpen, one that, like our own cities, is filled with opportunities, and arbitrated by rules and consequences. And unless you’re deliberately playing against type, or are simply clumsy, you can’t help but bring yourself into interactive fiction. In Grand Theft Auto, your interests and predilections will eventually be reflected in your activity, be it hunting wild animals, racing jet-skis, hiring prostitutes, buying property, planning heists, or taking a bracing hike first thing in the morning. If you are feeling hateful in the real world, the game provides a space in which to act hatefully. As the philosophers say: wherever you go, there you will be.

For the British artificial intelligence researcher and computer game designer Richard Bartle, the kaleidoscopic variety of human personality and interest is reflected in the video game arena. In his 1996 article “Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs,” he identified four primary types of video game player (the Killers, Achievers, Explorers, and Socializers). The results of his research were, for Bartle, one of the creators of MUD, the formative multiplayer role-playing game of the 1980s, obvious. “I published my findings not because I wanted to say, ‘These are the four player types,’” he recently told me, “but rather because I wanted to say to game designers: ‘People have different reasons for playing your games; they don’t all play for the same reason you do.’”

Bartle’s research showed that, in general, people were consistent in these preferred ways of being in online video game worlds. Regardless of the game, he found that “Socialisers,” for example, spend the majority of their time forming relationships with other players. “Achievers” meanwhile focus fully on the accumulation of status tokens (experience points, currency or, in Grand Theft Auto’s case, gleaming cars and gold-plated M16s).

Our disposition can often be reflected in our choice of character, too. In online role-playing games, for example, players who assume the role of medics, keeping the rest of the team alive in battle will, Bartle found, tend to play the same role across games. “These kinds of games are a search for identity,” he said. While players sometimes experiment by, for example, playing an evil character just to see what it’s like, Bartle found that such experiments usually lead to affirmation rather than transformation. “Basically,” he said, “if you’re a jerk in real life, you’re going to be a jerk in any kind of social setting, and if you’re not, you’re not.”

In a 2012 study, titled “The Ideal Self at Play: The Appeal of Video Games That Let You Be All You Can Be,” a team of five psychologists more closely examined the way in which players experiment with “type” in video games. They found that video games that allowed players to play out their “ideal selves” (embodying roles that allow them to be, for example, braver, fairer, more generous, or more glorious) were not only the most intrinsically rewarding, but also had the greatest influence on our emotions. “Humans are drawn to video and computer games because such games provide players with access to ideal aspects of themselves,” the authors concluded. Video games are at their most alluring, in other words, when they allow a person to close the distance between how they are, and how they wish to be.

“It’s the very reason that people play online RPGs,” Bartle said. “In this world we are subject to all kinds of pressures to behave in a certain way and think a certain way and interact a certain way. In video games, those pressures aren’t there.” In video games, we are free to be who we really are—or at least find out who we really are if we don’t already know. “Self-actualization is there at the top of Maslow’s Hierarchy of Needs, and it’s what many games deliver,” Bartle added. “That’s all people ever truly want: to be.”

Not every game, however, allows us to act in the way that we might want to. The designer, that omniscient being who sets the rules and boundaries of a game reality, and the ways in which we players can interact with it, plays their own role in the dance. Through the designer’s choices, interactions that we might wish to make if we were to fully and bodily enter the fiction are entirely closed off. We may be forced to touch the world exclusively via a gun’s sights. There is no option in many video games to eat, to love, to touch, to comfort, or any of the other critical verbs with which we live life.

The crucial role of the designer in deciding the rules of how we can be in their game can be vividly seen in Undertale, a critically lauded roleplaying game from 2015 which subverted its genre by allowing players to befriend the game’s monsters, not just stab at them with swords. The game’s creator, Toby Fox, is hesitant to overstate to what degree a player’s choices in his game reveal their personality. “I think a person saying, ‘I love Undertale,’ tells you more about the person than the routes they took in the game,” he told me. Nevertheless, he remains fascinated by the question of why people play the way they do. “I hear things like, ‘I got to the last boss and stopped playing because it was too much pressure,’ or ‘I kept breaking all the pots in that character’s house because I hated the fact that he told me not to.’ That’s valuable information about a person, I think.”

The opportunity for self-expression in role-playing games such as Mass Effect and Star Wars: Knights of the Old Republic, where you must make moral choices in how to act, is clear, even if those choices are often embarrassingly simplistic and binary. (In Mass Effect, for example, the game places your character on a sliding scale between the virtuous “Paragon” and the villainous “Renegade” according to your choices thus far.) But for Fox, competitive games also allow for expressiveness. “In high-level Super Smash Bros.,”—a fighting game in which players assume the role of various Nintendo characters and attempt to knock the color from each other’s pixels—“you have some players that love to play proactively and aggressively, and there are some players that play super methodically,” he said.

One’s choice of character in a fighting game may reflect one’s personality (a lithe, offensive avatar versus a slower, more defensive type, for example) but Fox often sees players use characters in ways that reflect their individual play style, rather than that which is encouraged by their chosen avatar’s strengths. “One of the best ways to beat Jigglypuff”—a pink, marshmallow-like character loaned from the Japanese monster-collecting game, Pokémon—“is to play very defensively,” he told me. “But Mango, one of the best professional Super Smash Bros. players, often refuses to play that way against Jigglypuff, even if it means losing. Why? Because if he’s going to win, he wants to win being honest to himself. The way he plays is representative of who he is.”

This sort of anecdote suggests that self-determination, the theory that seeks to explain the motivation behind choices people make without external influence and interference, holds in video games as in life. The authors of a 2014 paper examining the role of self-determination in virtual worlds concluded that video games offer us a trio of motivational draws: the chance to “self-organize experiences and behavior and act in accordance with one’s own sense of self”; the ability to “challenge and to experience one’s own effectiveness”; and the opportunity to “experience community and be connected to other individuals and collectives.”

For these researchers, incredibly, enjoyment is not the primary reason why we play video games—“it is rather,” they wrote, “the result of satisfaction of basic needs.” Video game worlds provide us with places where we can act with impunity within the game’s reality. And yet, freed of meaningful consequence, law abiders continue to abide by the law. The competitive continue to compete. The lonely seek community. Wherever we go, there we will be.

Simon Parkin is the author of Death by Video Game: Danger, Pleasure, and Obsession on the Virtual Frontline, and has written essays and articles for various publications, including the New Yorker, the Guardian, the Times, MIT Technology Review, and the New Statesman.

Tags: #mscedc
January 25, 2017 at 06:33PM

Article: Why paper is the real ‘killer app’

BBC Capital

By Alison Birrane/23 Jan 2017

With apps taking over our lives, there’s a movement afoot as people yearn for simpler, technology-free times.

Every January, Angela Ceberano sets goals for the 12 months ahead. And on Sunday nights, she plans and organises the coming week.

But instead of spreadsheets and fancy smartphone apps, the Melbourne, Australia-based founder of public relations firm Flourish PR uses notepads, coloured pens and a stack of magazines. With these, she brainstorms, makes lists and creates two vision boards: one to manage her private life, and one with her team.

Sales of stationery have boomed in no small part due to the popularity of ‘bullet journaling’

Ceberano is anything but a technophobe. A digital native with a strong social-media presence, she splits her time between traditional and new media, and between Australia and San Francisco, where some of her start-up clients are based. For certain tasks, she just prefers the simplicity, flexibility and tactility of the page.

“Sometimes, I just want to get rid of all the technology and sit down in a quiet space with a pen and paper,” she says. “There are so many apps out there and I feel like no one app gives me everything that I need. I’ve tried and really given them a go, doing those to-do lists of having your priorities or brain storming using lots of different apps … [but] when I get a pen and paper, or when I’m using my old-fashioned diary and pen, it just feels more flexible to me. I can always pull it out. I can focus.”

Tactile sensory perceptions can stimulate parts of the brain that are associated with creativity and innovation (Credit: Getty Images)

She’s not alone. A quick scan of social media illustrates a quiet return to the humble charms of stationery and lettering. Many people are using cursive writing and colouring in to help organise their lives or work on certain goals — whether it’s fitness, finances, or fast-tracking their careers. And, despite the proliferation of apps, other back-to-basics ideas have gained popularity online.

The science behind it

Science suggests these traditional types might be on to something. While technology can certainly provide an edge for certain tasks, neuroscience increasingly suggests that digital overload is a real and growing concern. A 2010 study by the University of California at San Diego suggests we consume nearly three times as much information as we did in the 1960s. And a report by Ofcom in the UK says that 60% of us consider ourselves addicted to our devices, with a third of us spending longer online each day than we intend. So are we doing too much, and are our screens too distracting? Possibly. For instance, many studies indicate that multitasking is bad for us and makes our brains more scattered.

Other findings show that pen and paper have an edge over the keyboard. Research by Princeton University and the University of California at Los Angeles, published in 2014, showed that the pen is indeed mightier than the keyboard. In three studies, researchers found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand. Those who took written notes had a better understanding of the material and remembered more of it because they had to mentally process information rather than type it verbatim. And another study, published in the Journal of Applied Cognitive Psychology, showed that people who doodle can better recall dull information.

Jotting it down

Certainly, the concept of goal setting without technology isn’t new. It’s the way anyone did anything pre-Internet.

Amy Jones started selling goal-tracking art after a visual aid helped her pay off $26,000 in debt. Each swirl on her canvas represented $100 paid off (Credit: Map Your Progress)

The difference now is that there’s a return to traditional techniques by the digitally savvy. Many are successful vloggers, work in tech, or are experts in new media. And this latest trend has helped boost sales of stationery like Moleskine and Leuchtturm1917 notepads, the companies say. For its part, Moleskine has seen double-digit growth annually over the past four years, according to Mark Cieslinksi, president of Moleskine America. Leuchtturm1917 marketing manager Richard Bernier says sales took off around June 2016, due in no small part to the popularity of bullet journaling, a popular form of list-making, amongst the online community.

The new self-awareness

So, with the proliferation of technology specifically designed to aid productivity and efficiency, what’s the enduring appeal of simpler tools? For starters, a notepad will never run out of batteries or have a screen freeze halfway through a task. You can’t accidentally delete something. It won’t ring, ping, or pester you with constant social-media and email updates. And you can sketch, draw a diagram or stick-figure illustration — sometimes a picture is worth a thousand words — which isn’t as easily done on a smartphone.

For Amy Jones, creator of Map Your Progress, which involves goal-tracking through art, creating a visualisation helped her pay off $26,000 in debt. Inspired by the visual aids used by her mother, who worked in sales, Jones drew up a huge canvas of swirls, each representing $100, and hung it on the wall. Each time she paid that amount off, Jones, who lives in San Diego in the US, coloured one in with a brightly toned hue. The result? She paid off her debt in half the expected time and created an impressive artwork.

“I was surprised by how effective it was, at how satisfying it was to colour those things in,” Jones says. “I could take each one of these swirls and see the progress blooming in colour on my wall, then that motivated me to make different decisions. And so I was more aggressive about paying off the debt than I would have been otherwise.”

New York-based digital product designer Ryder Carroll created the Bullet Journal, a method of note-taking and list-making, out of a personal need

After posting about her success on Facebook, the idea took off. She started selling her designs, known as Progress Maps, online in 2015 with customers in countries as far flung as Australia using them to stay focussed on goals such as clearing debt, losing weight or training for a marathon.

“There’s almost a little bit of ceremony to it as well. People get really excited. They look forward to colouring in that swirl. It becomes something more than just swiping your finger on an app, or filling in a cell on a spreadsheet. It’s more of an experience.”

Similarly, New York-based digital product designer Ryder Carroll created the Bullet Journal, a method of note taking and list making, out of a personal need. “What you see now is the culmination of a lifetime of me trying to solve my own organisational problems, all of which stem from being diagnosed with ADD when I was very young,” he says. “A big misconception is that we can’t pay attention. But in my experience, we can pay attention, except you’re paying the attention to too many things at the same time. So I had to figure out a way to, in short bursts, capture information and also figure out how to be able to listen.”

Of the Bullet Journal, he says, “it was designed for me, but it was also designed for my kind of mind, which had to be flexible. Sometimes I use it to draw, sometimes I use it to write, sometimes it would be for planning, sometimes it would be for ‘whatever’ and I wanted a system that could do all those things.”

‘Getting your hands dirty’

Writing it down also sparks innovation. Being innovative and creative is about “getting your hands dirty,” a feeling that is lacking when you use technology or gadgets, says Arvind Malhotra, a professor at the University of North Carolina Kenan-Flagler Business School.

A return to traditional techniques by the digitally savvy has helped boost sales of stationery (Credit: Getty Images)

“Research has also shown that tactile sensory perceptions tend to stimulate parts of the brain that are associated with creativity. So, touch, feel and the sensation you get when you build something physical has also got a lot to do with creativity,” he says.

“My own research on fast prototyping reveals that even in the digital age, innovation is sparked when you complement the digital with physical,” Malhotra says. It’s the reason many technology firms love whiteboards, he says.

In almost all high-technology companies, whiteboards are still a dominant method for creative stimulation and collaborating (Credit: Getty Images)

“Nearly 80% of the physical workspaces I have observed, that are considerably creative in their output, use whiteboards,” he says. “What is really interesting is that in almost all the high-technology companies, those that make digital hardware and software, whiteboards are still a dominant method for creative stimulation and collaborating.”

Back to basics

For Ceberano, being able to switch off her phone, step away from the computer, sit down and focus is key, along with the flexibility to create her own systems.

Organisation apps will always represent “someone else’s format,” says Angela Ceberano, founder of Flourish PR (Credit: Flourish PR)

“You can get caught up in this stream of technology and actually it’s always on someone else’s terms,” she says. “With those apps, the reason I don’t use them is because they are someone else’s format. It’s not the way my mind thinks,” Ceberano says. “So when I’m there with a pen and paper, I’m putting it down in a way that is very organised in my head, but probably wouldn’t work for somebody else. … I think people are just trying to take back ownership over the time that they’ve got and also the way that we’re controlling the information that we’re taking in.”

Tags: #mscedc

January 23, 2017 at 11:37AM