Google I/O, Until Next Year


Well, Google I/O is a wrap everyone, and if you tuned in then hopefully you left with some cool takeaways.  If not, don’t worry; that’s what we’re here for.  When the conference first kicked off we wrote about day 1 and its highlights, but obviously the fun didn’t stop there.  The following is a crash-course selection of (in my opinion) the most important and amazing takeaways for any Android junkie.

Flutter:

If you’re an Android developer, then you’re undoubtedly familiar with Java and segueing into Kotlin.  You might not have heard about Flutter before, though.  It’s Google’s mobile app SDK for easily creating high-quality apps on both Android and iOS devices.  Written in Dart (a language also developed by Google), Flutter works with existing code and is built for ridiculously fast development.  Here’s a great video from Google I/O that goes more in depth on how to use Flutter to enhance your material design.

Duplex:

Now this one blows my mind, and I know I’m not alone here.  When you think of a sci-fi future, it’s reasonable if computers playing our personal secretaries pop into your mind.  That future seems to be the present now.

I’m very interested to see how Duplex functions in real-world applications, but the Google I/O 2018 keynote showed a quick demonstration of the Google Assistant booking a haircut appointment for Google’s CEO, Sundar Pichai.  From an outsider’s view the conversation was impossible to distinguish from an everyday conversation between two people, and when it was done the Google Assistant confirmed to Sundar that the appointment had been booked and added to his calendar.  It’s only a matter of time before this runs on both the client and the server side, with Duplex having conversations with itself to schedule our days, and that’s pretty wild.

Android P Beta:

Yes, I know we discussed Android P in the last blog on Google I/O.  But you’ll have to bear with me, because it’s happening again!  As of this week the Android P beta is available on Pixel devices as well as 7 other flagship devices.  Android P brings all kinds of cool new features to the table, and a lot of them revolve around predicting what you, the user, are about to do.  Adaptive Battery manages which apps get to run in the background, and Adaptive Brightness adjusts your screen’s brightness to your habits, all in an effort to both improve your experience and conserve precious battery power.

My personal favorite feature of P is Wi-Fi RTT.  Round Trip Time takes our current location-services capabilities and amplifies them.  Essentially, by measuring signal travel times to multiple nearby Wi-Fi access points, a user’s position can be calculated to within about a meter.  Just use your imagination for the applications this could come in handy for!  For more on Android P you can read our past posts or watch some Google I/O talks.
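To get a feel for the math involved, here’s a rough sketch in Python of how ranging and triangulation work.  This is just an illustration of the concept, not the actual Android Wi-Fi RTT API; the numbers and function names are mine:

```python
import math

C = 299_792_458  # speed of light in m/s

def distance_from_rtt(rtt_seconds):
    """One-way distance: the signal covers the range twice per round trip."""
    return C * rtt_seconds / 2

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three known AP positions and measured ranges.
    Subtracting the circle equations pairwise leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With three access points at known positions and three ranges derived from round-trip times, the intersection of the circles pins the phone down; the real system adds error handling and many more measurements, but the geometry is this simple.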

There’s lots more to take away from Google I/O, and honestly I’m cutting myself off here because otherwise I’d end up writing a paragraph or two about every session I watched from the entire conference.  It’s a great year to be an Android developer or even just own an Android device.

What interested you the most from the conference?  Let us know in the comments below!

Google I/O Is In!


We’ve talked about Google I/O being on the horizon here before, but we can do that no longer.  It’s here!  (Actually, once it’s over we’ll probably immediately start writing about 2019’s event.)

Yes, today marks the kickoff of Google’s 11th annual conference.  And as such the entire Android population has a lot of stuff to talk about.  Google I/O started off strong with its keynote mapping out some of the things to be discussed this year.  Here are some of the highlights of day one:

Artificial Intelligence:

As in most other places these days, AI was one of the most-used buzzwords on day one.  It’s somewhat become an all-encompassing term for any technological advancement that helps us.  Despite this, Google separates itself from the pack by bringing some pretty cool new features to the table.  Whether it’s self-writing emails or screen brightness that auto-adjusts to your preference, Google is working on slipping AI into every part of our days.

Actually, it’s so much cooler than that.  In the video above at 3:10 you can watch the Google Assistant play your personal secretary.  It makes a call to a local hair salon and books an appointment without the person on the other end ever realizing they’re talking to AI.  Scary cool.

Android P:

There’s been lots of hype about Android P in the past few months, and we got to see more today.  With its three key themes of Intelligence, Simplicity, and Digital Wellbeing, Android P seeks to one-up everything else already in your hand and provide a predictive, pleasant experience.  We’ve talked before about some of the new features coming with Android P, and today that list only got longer.

Adaptive Battery is a feature aimed at conserving battery life by using (you guessed it) AI.  It studies your app usage patterns and can then dedicate more battery power to the apps you’re likely to use in the near future.  Along with this comes the Adaptive Brightness feature I mentioned above, where your screen auto-adjusts to your preferences.

Not only does P look to alleviate your battery strain under the hood, but it uses its predictive analytics to bring apps you’re about to use to the forefront.  P is currently available on a select few devices (9 total), and if you’re interested in downloading it click here.  If you’re unsure what you’re doing and want support with flashing your phone, then check out our Smartphone Tech Course over at Phonlab.  Otherwise stay tuned and we’ll post a guide in the near future.

Augmented Reality:

As for the other big buzzword topic, Augmented Reality had some cool new features on display.  Maps have been souped up with the newest computer-vision features to recognize where you’re looking in the real world and flash both directional arrows for guidance and information about local places.  If you’re walking down the street and a restaurant catches your eye, say goodbye to opening up Yelp and searching for its reviews.

The camera has also been greatly enhanced with its new ability to judge depth, recognizing where things sit in the real world.  Moving your phone around your room, office, or down the street, you’re able to get live estimates of how far away things are.  This is sure to be crucial in a lot of coming apps.

There’s a lot more to come in this year’s Google I/O, and we’ll keep you updated here.  Is there anything in particular you want us to go more in depth on?  Comment below and we’ll give you all the info you could dream of!


Reverse Engineering Apps. A Primer


Reverse engineering is a pretty cool concept.  Someone builds something, you want to see how they did it, so you take it apart and see how it was put together in the first place.   It can be a great way to learn, and it pushes technological progress forward.  But there’s also a dangerous side to it.

Reverse engineering done with malicious intent can lead to copyright infringement or other damages.  It’s a fine line between what is ethical and what isn’t, and that doesn’t change inside the Android world.  Here, reverse engineering is common, and developers should always account for it when building apps to make sure they’re taking the necessary precautions.

The term for reverse engineering an app is “decompiling”, and what you’re decompiling is an APK (Android Package Kit).  This is essentially just a .zip file that stores our app’s code.  You build an APK when you compile your code, and that APK is what you upload to the Google Play Store.  It’s then what users around the world download onto their devices.  And if they’re tech-savvy enough, they can open up this APK and see what’s inside.
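You can see the “it’s just a zip” idea for yourself with a few lines of Python.  Here I build a toy archive in memory with the kinds of entries a real APK contains (the file names mirror real APK contents, but this toy is obviously not a runnable app):

```python
import io
import zipfile

# Build a toy "APK" in memory with the kind of entries a real one contains.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", "<manifest/>")
    apk.writestr("classes.dex", b"dex")   # compiled Dalvik bytecode lives here
    apk.writestr("resources.arsc", b"")   # compiled resources

# Because an APK is just a zip, any zip reader can list what's inside.
with zipfile.ZipFile(buf) as apk:
    contents = apk.namelist()

print(contents)  # ['AndroidManifest.xml', 'classes.dex', 'resources.arsc']
```

Point the same two lines of reading code at a real APK pulled off a device and you’ll see its manifest, resources, and `classes.dex` files, which is exactly where decompilers start.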

Why Decompile?

Let’s take a second to think about a couple of reasons why we would want to decompile our APKs.  One possibility is that we’ve misplaced our source code and are hoping to recover it.  If that were the case, we could decompile our app from a phone it was already installed on.  Note that this has its limitations, as the decompiled code will not be exactly the same as the original.  Some parts are lost along the way, so make sure you save your code on GitHub!

Another possible reason for decompiling an app would be to evaluate its security.  If you’re able to see things you want to keep private simply by decompiling an app, other people can too.  And chances are they won’t always be decompiling for educational purposes.  I’ll be following up on this blog shortly with another going more in depth on how to properly hide secrets in your apps.

And of course there’s always decompiling for modding purposes.  If you reverse engineer an app and put it back together how you want, you can add new features or customize how things behave.  Here’s where I throw in a disclaimer that you should make sure you’re a law-abiding citizen while doing these things.  Lots of companies/developers would be very unhappy to hear that someone is decompiling their apps for monetary gain.

How To Decompile?

The good news is that if you want to decompile apps on your own, you absolutely can!  You’ll need to download a popular tool known as apktool, and also make sure you have Java set up on your computer.  Here’s a great video showing how to use apktool to theme and edit Android apps.
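If you want a rough idea of the workflow before watching, a typical apktool session looks something like this.  The file and keystore names here are hypothetical, and note that a rebuilt APK must be re-signed before Android will install it:

```shell
# Decompile: unpack the APK into readable resources and smali code
apktool d myapp.apk -o myapp_src

# ...edit resources, images, or smali inside myapp_src/ ...

# Rebuild the edited sources into a new APK
apktool b myapp_src -o myapp_modded.apk

# Re-sign the rebuilt APK (Android refuses to install unsigned packages)
apksigner sign --ks my-keystore.jks myapp_modded.apk
```

The decompile/edit/rebuild loop is the whole game; everything past that is learning your way around the unpacked files.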

Want to know more about decompiling apps?  Don’t worry, we’ll be writing lots more on it soon, but in the meantime let us know what you want to know in the comments below!

Android’s Developer Website Just Got A BIG Makeover


If you’ve ever thought about developing for Android then chances are you’ve at least stumbled upon developer.android.com.  And chances are you left with a bitter taste in your mouth.  Fear not, things are looking up.

I remember my first time looking at Android’s developer docs.  I was a novice developer and as such the website was chock-full of useful information, but it seemed borderline impossible to navigate.  Countless topics linked into one another describing the different components of an app.  Couple this with all the attributes listed for each subject, and your brain quickly starts to spin.

What’s New?

I’ve discussed this navigation difficulty with others before, and that’s why I was so happy to hear the website just got a makeover.  First off, it looks much better.  Whitespace gives the new layout a sleeker, more aesthetic look, while the landing page emphasizes a preview of Android P.  Scrolling down from there, the home page is neatly divided.  Sections for featured topics, material design, and where to begin your development journey pave the path.

But, of course, there’s much much more to this website than how it looks.  The most important thing is that someone who finds themselves here actually learns about what they’re looking for.  The new website does a much better job of guiding users who are in uncharted territory.  Selecting “Docs” in the top banner takes users to this page.

Here the core developer topics that every Android programmer NEEDS to know are listed.  Clicking each of these links takes the user to a simple explanation accompanied by an intro video.  Immediately below these are trees of related, more in-depth topics.  The result is an easy cursory explanation of each topic and then more complicated explanations for those who want to learn more.

Material Design and More

The website has tons of sections and features, but one other I’d like to highlight is the “Design & Quality” tab.  It’s important to remember that there’s more to developing than just creating sound logic.  Users of your apps have also come to expect high-quality layouts and design patterns.  This section of the website helps explain to developers how to wow users with apps that know what they want before even they do.

In summary, the old developer website was certainly useful; you just needed to know what you were looking for.  The new model offers a much easier guide for new entrants.  It takes them by the hand and shows them both which topics are easy to comprehend and which fundamentals should be learned first.  Overall I think the new website is a vast improvement over its somewhat clunky predecessor, and I look forward to using it as my development journey continues.

Have you checked out the new site and feel that it’s still missing something?  Let us know in the comments below!

Android Security Is Still Secure. Seriously.


There’s been a lot of media hype this past month about Android phones and their lack of security.  Headlines such as “How Android Phones Hide Missed Security Updates From You” have been floating around causing mass panic.

Take a deep breath.  It’s ok.

Despite the plethora of recent articles claiming that Android phones are under attack and that you’re a victim, chances are you’re actually safer than you think.  Yes, there was a study earlier this month that found some phones were behind on their security updates.  But that doesn’t mean all of your data is exposed to whoever wants to take it.  Even with a few security updates missing, you should be alright.  Let’s take a second to discuss some of the other security features Android’s architecture has in place to protect you:

Google Play Protect

Google Play Protect is a safeguard to protect Android users from malicious apps.  Even with Google’s screening process for letting apps onto the Play Store, chances are some baddies will slip through the cracks and become available for download.  Google Play Protect attempts to stop these apps in their tracks by routinely scanning every app on your phone, even after it’s been installed.  If a cause for concern is detected, you’ll be notified.

This software also applies to app updates, so the short version is that apps can’t just slide by once: as long as you have Play Protect enabled on your phone, apps are continuously exposed to it.  Chances are your phone already has Play Protect, but if you want to be sure (or just see what it’s been up to) you can find it in the Play Store.  Open the store and tap the three-horizontal-bars menu icon.  Then select “Play Protect” and you’ll be taken to a page showing which apps have been scanned recently and how your device looks.

Sandboxing

Android apps are naturally sandboxed from one another.  What this means is that each app’s data and code execution is isolated from the others.  So if you happen to download the wrong app, it doesn’t automatically have access to all of the apps already on your phone.  Content Providers offer a storage mechanism for apps so that their information has to be explicitly requested before it becomes accessible to anyone else.  We go into depth on the Android security framework in our Android development course over at Phonlab.

Android Permissions work along with this so that, with a bit of common sense, you should be safe no matter what.  Permissions are essentially requirements: if an app utilizes a certain feature (such as syncing with your contacts), it has to be granted permission by the user.

These permissions are presented to a user when the app attempts to access the feature, and access is only allowed when the user says so.  You retain complete control over what an app can reach.  Imagine you downloaded a game and it started asking for access to your contacts and your saved media files.  Red flags should go up right away, since a game has no reason to use these.  As long as you don’t blindly hit accept on every permission, you retain a ton of control over what an app can actually do.
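For developers, this contract is visible right in the app’s manifest: every permission an app can ever ask for must be declared up front.  A minimal sketch (the package name here is hypothetical):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.someapp">

    <!-- The app cannot touch contacts unless this is declared AND,
         on Android 6.0+, the user approves the runtime prompt. -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />

</manifest>
```

Anything not listed here is simply off-limits to the app, which is why a game requesting your contacts is such a visible red flag.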

What are your thoughts on Android’s security measures?  Let us know in the comments below!

Counting down to Google I/O 2018


Google I/O is just around the corner.  Developers and Android users around the world are gearing up to see what’s in store for the coming year.  Theories about what the annual conference will entail are floating about, and Google recently updated its event schedule.  With last year’s discussions of AI, VR, and Android O, things are sure to be interesting.

So what’s on the agenda for this year?

Android P

We’ve talked about Android P before here at RootJunky.com, and it’s sure to be discussed in a little more detail at the conference.  P (currently nicknamed Pistachio Ice Cream) was first released as a developer preview at the beginning of March.  It features things such as an improved notification system, notch support, and Wi-Fi-based positioning that’s incredibly accurate.  It’s expected that Google will launch a beta program for interested users soon (and maybe give a few more hints at the upcoming name).

AI

Artificial Intelligence was all the rage at last year’s conference with Google Lens allowing users to scan real life objects and receive information.  Couple this with Google Assistant and Google Home improvements, and AI seems to be at the forefront of every new technological movement. 

Google Assistant appears quite a few times in the current schedule, so it’s sure to be a big discussion topic.  Assistant is already loaded with tons of features, but it would be silly to leave it as is.  One session is titled “Design Actions for the Google Assistant: beyond smart speakers, to phones and smart displays”.

Assistant could be expanding past voice interactions and into visual cues.  Add in improvements allowing third-party app integration, and there could be some seriously cool possibilities if the creativity door is opened for developers to let their apps prompt the Assistant to take action.  Notice how vague I’m being?  It’s because of how open-ended these features could really get if that connection is bridged.

AR/VR

In February Google officially released v1.0 of ARCore, the mixed reality development platform, allowing developers to easily integrate Augmented Reality into their apps (way more exciting than I just made it sound).  Our tutorial series shows how to integrate AR into your first app, but ARCore’s potential goes much deeper than what we cover.  I wouldn’t be surprised if plans to improve this platform and potentially incorporate it with Google Lens are underway.

Looking over the current schedule, tons of other topics will be covered in the upcoming conference.  I’ll be one of the many that don’t attend but tune into what I can online.  I’d highly suggest you do the same to stay on top of what’s new in the development world.  Or if you’d prefer, we’re sure to highlight the big parts here.  Stay tuned!

Malicious Apps: Mining Your Own Business


Whether you know it or not, you may be an investor in bitcoin.  Ok, that’s not entirely true.  But your phone may have helped someone else mine it without your consent.

Researchers at Kaspersky Lab, a cybersecurity company, recently found multiple “mining” apps on the Google Play Store disguised as something else.  Apps hiding under the mask of games or streaming apps have secretly been using smartphone processors to mine cryptocurrency without the user’s knowledge.

Mining in Smartphones

Thanks to the recent news hype, most people are familiar with the concept of cryptocurrencies such as Bitcoin and how they’re mined.  There’s no physical digging; instead, users are rewarded with currency in return for processing transactions and updating the blockchain ledger.  And since processing transactions takes hardware and electricity, the more computing power you have at your disposal, the more currency you can earn.  This has resulted in giants entering the business and consolidating massive amounts of hardware in warehouses.
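To make “processing transactions” a little more concrete, here’s a toy proof-of-work loop in Python, the same flavor of brute-force hashing that hidden miners burn your processor on.  It’s vastly simplified: real mining uses far higher difficulty and real block data, and the function name is mine for illustration:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty`
    zero hex digits.  This guessing game is the 'work' miners are paid for."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16,
# which is why mining scales with raw processing power.
nonce = mine("some batched transactions", 3)
```

Every iteration of that loop is pure CPU work, which is exactly why a phone quietly running it gets warm, drains its battery, and earns someone else money.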

Smartphone processors are not as powerful as their desktop counterparts, but when one app is able to tap into thousands of them, the results are still significant.  Kaspersky Lab has found multiple apps with this affliction, some of which have been downloaded more than 100,000 times.  Some of these apps are even programmed to keep tabs on how much processing power they’re using so as to easily fly under the radar of the average user.

Google’s response

Google has since removed the known abusers of this tactic, but it’s hard to say how many apps in public hands are doing the same thing right now.  It also seems that the betrayal of trust isn’t the only underlying issue here.  Google recently announced that it would remove any and all mining extensions from the Chrome Web Store, regardless of whether users were aware of what they were doing or whether the extensions were legitimate.  The question remains whether this policy will expand to the Google Play Store, but I think it’s safe to assume it’s only a matter of time before it does.

And until then the question becomes how to avoid these kinds of apps.  Right now from a development standpoint there are no permissions that must be accounted for in relation to mining, so there doesn’t seem to be much security that can block these kinds of apps (other than mindful downloading).

What are your thoughts on your phone being used as a mining tool without your consent or knowledge?  Do you have any thoughts on how to prevent this?  Let us know in the comments below!

Building Your First Augmented Reality App Pt. 2


Welcome to part two of this Augmented Reality tutorial.  Now that we’re done with the boring set-up process, we’re ready to dive into Unity and see a final product!

As a quick review, in part 1 you created a Vuforia account and a license key.  Then on Vuforia’s website you created a database to hold your image target (the dollar bill) and downloaded it along with our 3D elephant.  Finally, you downloaded and opened Unity (our game editor) and changed the build settings to Android.  Ok, now let’s continue from there:

Setting our Package Name:

Remember how last time we opened “Player Settings” and then the tab that said “XR Settings”?  Well, there are a few more small things we’ll have to do in this section.  Instead of “XR Settings”, open up the “Other Settings” tab.  Every app published on the Google Play Store needs a unique ID so that it doesn’t get mixed up with other apps.  So while you may see two apps with the same name, under the hood their IDs are different.  This ID is known as the app’s package name.

In the “Other Settings” tab we’ll write what we want our package name to be.  This can be whatever you want, but I’ll use “com.rootjunky.vuforiaelephant”.  Now my app has an ID, and Unity will be able to install it on a mobile phone.  Also go ahead and uncheck the “Android TV Compatibility” box, since this app won’t work on Android TVs.

Now let’s import all of our materials from the first post.  In your Unity project you should see a section for the Project hierarchy.  This shows all the files/resources in the project, and we’ll be storing everything inside the folder named Assets.  We already have our Vuforia files in here; to get everything else into this folder, click and drag the following into the Assets folder:

  1. The Vuforia database you downloaded
  2. The 3D elephant model

Creating the Scene:

Once all of these assets are together we can begin messing with our scene.  Go to “File”, then “Save Scene”, and save this scene as “main” (we can think of scenes as different parts of our game, but we only need one for this project).  Now inside of our scene’s hierarchy, right-click on the Main Camera and delete it.  Then click “Create” -> “Vuforia” -> “AR Camera”.  This adds Vuforia’s custom camera to our scene, which takes care of all image targeting (i.e. recognizing dollar bills).

But now that we have the AR Camera, we still need to tell it to look for a dollar bill and to place an elephant on top of that dollar bill once it’s found.  To do this select “Create” -> “Vuforia” -> “Camera Image” -> “Camera Image Target”.  If you click on an object in the scene, the right-side tab shows details about it, and selecting the Image Target displays a detail section titled “Image Target Behavior”.  In here set Type to “Predefined”, Database to “DollarElephant”, and Image Target to “dollarTarget” (see the following image).

Setting these values connects our database to the image target, so now our camera knows to look for a dollar bill.  But in order to use Vuforia we also need to add our license key.  Make sure you still have this copied to your clipboard, and then select the AR Camera in the scene.  One of the details you’ll see appear for it is labeled “Vuforia Behaviour”.  In here click the “Open Vuforia configuration” button and then paste in your App License Key.  Then in the Databases dropdown check the boxes that say “Load DollarElephant Data” and “Activate”.

Displaying The Elephant:

Now for the final step: attaching our elephant.  Find the elephant model inside of your Assets folder (most likely named “source” right now).  Click and drag this little guy onto your ImageTarget in the Hierarchy tab.  This will make the elephant become a “child” of the ImageTarget.

Chances are things look funky on your screen, though, and this is because the elephant model is HUGE.  Inside its Inspector tab we can change its position, rotation, and scale, so let’s drop its x, y, and z scale values down to 0.1.  Then set the position to 0 on the x and z axes, and 0.5 on the y axis (this just raises the elephant a bit so he’s on top of the dollar).

And that’s it!  We’ve attached our Vuforia files to the scene and bound a 3D model to Vuforia’s image target.  With just a few steps we’re now ready to see our augmented reality creation come to life.  Connect your phone to your computer (make sure USB debugging is enabled) and then go to File -> Build Settings again.  Select “Build and Run”, and your game will install onto the connected device.

When the app is up and running point it at any dollar bill, and you’ll see a virtual elephant appear on top.  What’s even cooler is that if you pick up the dollar and move it around the elephant will stay on top.

Congratulations on sticking through this whole process.  It’s very possible you got stuck along the way, and if that’s the case just comment below and I’ll try to help you out.  And if you’re interested in learning more about Android development then you can always check out Phonlab’s course HERE.

Building Your First Augmented Reality App


We’ve talked before about how influential augmented reality is going to be in the future.  What we didn’t mention is how easy it can be to take part in shaping that future.  Over the course of the next two posts we’ll show how to incorporate AR into an app, and when it’s all said and done we’ll be able to look at a virtual elephant in the real world.

It’s not too complicated as far as subject material goes, but there are a couple of steps involved, so we’ll split this into two pieces: gathering our resources and then putting them into action.

Before we do any work though, let’s take a second to discuss the bigger picture of what we’ll be doing here.  If you’ve ever experimented with game development, then you’ve probably heard of Unity.  If not, then some things in this tutorial may seem a little confusing at first (but far from impossible!).  Unity is a development environment where developers can make 2D and 3D games, and we’ll be using it here to host our augmented reality app.  Click here to download Unity, and when you do make sure that you include the Android/iOS and Vuforia plugins.

We all know about Android and iOS, but odds are Vuforia is a new name.  Vuforia is a popular AR platform that allows us to use image targeting in our apps.  Essentially all we have to do is pick a 3D model and an image.  Vuforia will then anchor our 3D model to that image wherever it sees it in the real world.

For example, in this app we’ll be using a 3D model of an elephant made with Blender, and the image will be a $1 bill.  With this combination, any time our app’s camera finds a dollar bill in the real world, it will place the 3D model on top of it.  The result is the title image of this post.

Ok, that’s enough background.  Let’s jump into the actual setup.  Use the above link to download Unity if you don’t already have it, and then go to developer.vuforia.com and create an account.  After you’ve made an account, click on the Develop tab and then click to create a new license key.  You can name this anything you want, but as you can see in this image I chose “VuforiaElephant”.

After creating the license key you’ll be able to click on it and see the string of random characters representing it.  Copy this value; we’ll be pasting it later in this tutorial.

We create this license key so that our app in Unity will be able to connect to our Vuforia account.  For the second step, we’ll need to create a database inside of Vuforia to hold our dollar bill image.  Change your selection from License Manager to Target Manager and then add a new database.  I’ve named mine “DollarElephant”.  Inside of this database we’ll click “Add Target” to add a new target.  Pull any image of a dollar bill from Google Images and add it here.  Then set its width value to 5 and give it a name (dollarTarget is just fine).

When you’re done with this, click to download the database, and that’s everything we’ll need to do in Vuforia.  Before moving into Unity, let’s also get the 3D model of the elephant we want to use.  Click here to download the elephant made by sagarkalbande (and feel free to try this out with a different model).  Save this file onto your computer, and now let’s move into Unity.

If you’re feeling overwhelmed right now, don’t worry; we’re not going to do much else in this first part.  For now let’s open Unity and create a new project named “VuforiaElephant”.  Go to “File”, then “Build Settings”, and select Android as your platform.  After making this change the little Unity cube should appear next to Android.

Finally, inside of the Build Settings window click on “Player Settings”, and a bar of setting options will appear on the right side of your screen.  Open the tab that says “XR Settings” and check the box that adds Vuforia Augmented Reality to our project.  Go ahead and import the settings Unity says it needs to add, and now we’re ready to start the fun stuff.

If you’ve made it this far down the blog, then good work sticking through the dry steps.  We created a Vuforia account, made a license key, and selected a dollar bill as our image target.  Then we downloaded our elephant 3D model and created a new project in Unity.

So now we just have to make the connection inside of our Unity app between the dollar and the elephant.  Stay tuned for the second part of this tutorial in the next few days, and we’ll finish out the project so that everyone can have their own virtual elephant!  Does app development have you completely lost?  Check out Phonlab’s Android app development classes HERE.

From Rocks to Digital Wallets


We started with trading rocks, then gold, then paper, and then credit cards.  Ok, that’s an extreme oversimplification of things, but the point is that over time humans have become more advanced with their trade patterns.  In an ever more digital world, it only makes sense that our next step is completely digital assets.

No, I’m not talking about cryptocurrencies like Bitcoin, which were all the hype last November (although I’m not ruling those out either).  Instead the spotlight is shifting toward other digital wallets and online payment systems like Google Pay.  In the next few years a huge shift will take place toward these systems.

What is a Digital Wallet?

For those of you unfamiliar with digital wallets, they’re simply electronic systems that allow their users to make digital transactions.  The user can link their bank account if they desire, or they can transfer an amount into the wallet and spend from there.  Payments can be made either remotely from their account or with Near-Field Communication (“scanning” your phone at a register).  Last month Google merged two of its payment services (Android Pay and Google Wallet) into one, and since then steps have been taken to make the service more usable in everyday life.

The most recent stride to gain media attention came last week, when the Las Vegas Monorail began accepting Google Pay for transit tickets.  The city has worked with Google to not only accept payment here but also let users monitor past transactions and map transit.  It should be no surprise that the long-term goal is to implement this in all major cities.

Who uses Google Pay?

NFC systems like Google Pay are already integrated into many of our everyday lives.  Industry giants like Dunkin’ Donuts and McDonald’s accept them for in-person payments, and companies like Airbnb allow users to make payments via their online accounts.

It’s mostly giants now, but as implementation becomes easier we’ll see NFC phone payments spread like wildfire, the same way credit cards did.  Square has produced an NFC reader that lets anyone accept Google Pay/Apple Pay on their device.

Of course, that’s the business side of things.  Apps like Venmo and Cash App have exploded in popularity for easy, fee-free P2P transactions.  Back in February there was some concern about peer-to-peer payments not existing in the newly consolidated app.  Since then the worry has been calmed, and Google has announced that Google Assistant can be used to pay people back with Google Pay.

It’s only a matter of time before everyone is using some form of digital wallets.  In the coming years we’ll likely see both improvements to the current system as well as new innovations to completely replace it.  If you have any thoughts on improvements NFC systems could use let us know in the comments below!