help with philosophy...

• #1

  I'm writing a paper and need help. The question is: "Do you think a computer will be able to experience frustration?"

  We're talking about AI and how we will never build a "strong AI" because there is a level of "understanding" that humans experience but AI cannot.

  Another option I have is to write a more complicated paper. The premise is that a materialist believes all emotions can be broken down into chemical processes in the brain. That being said, is it possible to give someone medication or perform a medical procedure to make someone fall in or out of love, or make them permanently happy or unhappy?

  The issue is that my arguments are straightforward and I can't see any more aspects of the question that I could argue for or against. If people give me good ideas and arguments, I will cite you in my paper. You can use a fake name; it's not like my professor cares.

  • #2
    Never can happen. Computers are for porn and not emotion. That is all.

• #3
  Originally posted by Timma View Post
    Never can happen. Computers are for porn and not emotion. That is all.
  lmao. My professor used that example in class. He said, "If I ask a computer to render porn, it has no say. It's a woman, it must oblige."

• #4
  I like your professor!

• #5
  It depends how you define experience. If you mean whether it can happen, then the computer will experience frustration, just not in the sense that a person would. The computer may play a game and realize it is not winning. If it then turns itself off, that would be a form of frustration.
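
  For what it's worth, the "turns itself off" behavior is trivial to script; whether that counts as experiencing frustration is exactly the philosophical question. A toy sketch in Python (the game, win rate, and give-up threshold are all invented for illustration):

  import random
  import sys

  losses_in_a_row = 0
  GIVE_UP_AFTER = 3  # made-up tolerance before the machine "quits"

  while True:
      won = random.random() < 0.25  # pretend game with a 25% win rate
      if won:
          losses_in_a_row = 0
          print("won a round")
      else:
          losses_in_a_row += 1
          print(f"lost ({losses_in_a_row} in a row)")
      if losses_in_a_row >= GIVE_UP_AFTER:
          print("giving up")  # the "frustration" response: shut itself off
          sys.exit(0)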

  As for the materialist, chemicals are king. New love is the release of hormones in the brain. Once these chemicals fade, what remains is true love. If you keep pumping someone full of these chemicals, the new love won't fade. Therefore, you can make someone fall in love.

  To make them fall out of love, if you can find a way to eliminate these chemicals, you would force someone to fall out of love, and possibly to lose all happiness.

• #6
  I say yes, it is possible that someday a computer will reach a level near human consciousness. Animals like cats and dogs clearly feel frustration; they become anxious, scared, and nervous as well. Even mice populations have demonstrated frustration when large numbers are kept in close proximity.

  If relatively "simple" creatures can display those emotional responses, I don't think it's completely unreasonable to think a computer will be able to do the same at some point in the future.

  The tipping point will be a computer that can demonstrate true free will rather than a basic, formulaic response from its programming.

• #7
  Originally posted by decadecadeca View Post
    I say yes, it is possible that someday a computer will reach a level near human consciousness. Animals like cats and dogs clearly feel frustration; they become anxious, scared, and nervous as well. Even mice populations have demonstrated frustration when large numbers are kept in close proximity.

    If relatively "simple" creatures can display those emotional responses, I don't think it's completely unreasonable to think a computer will be able to do the same at some point in the future.

    The tipping point will be a computer that can demonstrate true free will rather than a basic, formulaic response from its programming.
  I watched a show where they took brain matter and hooked electrodes up to it, and the cells started communicating with each other; they did not have the technology to understand what was being said, but I think there will eventually be a time when they can take brain matter and put it into a robot application. I guess you would call that a cyborg.

• #8
  Originally posted by dbjmofo View Post
    We're talking about AI and how we will never build a "strong AI" because there is a level of "understanding" that humans experience but AI cannot.
  What level of "understanding" is that?

  Computers can already learn from past mistakes. Land-navigating robots placed in a maze will learn from blocked routes and not take that path again, essentially "learning" how to successfully exit the maze.
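
  A toy sketch of that "don't take blocked routes again" idea (an invented maze and invented code, not any particular robot's software): the solver remembers dead ends in a set and never re-enters them.

  # Toy maze solver that "learns" which routes lead nowhere.
  MAZE = [
      "S.#",
      ".##",
      "..E",
  ]

  def solve(maze):
      rows, cols = len(maze), len(maze[0])
      start = next((r, c) for r in range(rows) for c in range(cols)
                   if maze[r][c] == "S")
      dead_ends = set()  # cells the robot has "learned" lead nowhere

      def walk(pos, path):
          r, c = pos
          if maze[r][c] == "E":
              return path
          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              nxt = (r + dr, c + dc)
              nr, nc = nxt
              if (0 <= nr < rows and 0 <= nc < cols
                      and maze[nr][nc] != "#"
                      and nxt not in dead_ends
                      and nxt not in path):
                  result = walk(nxt, path + [nxt])
                  if result:
                      return result
          dead_ends.add(pos)  # never try this route again
          return None

      return walk(start, [start])

  print(solve(MAZE))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]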

                "We have shown that evolution of learning-like properties is possible without
                modifications of synapse strengths, but simply by relying on complex internal
                dynamics of CTRNNs........However
                neurophysiological experiments have indicated that the way animals and
                humans perceive, classify, and memorize, for example in the olfactory system, is
                by transitions between chaotic attractors in dynamical systems formed by large
                numbers of neurons in the brain. These results correspond nicely with the
                view of memory and learning presented in this paper."

                http://infoscience.epfl.ch/record/63...l_evorob03.pdf

                I'd cite it properly but I don't know if you're using APA or MLA.
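
  For concreteness, this is roughly the model the quote is talking about. A minimal CTRNN step (standard Beer-style equations; the weights, time constants, and input below are made-up numbers, not from the paper):

  import numpy as np

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def ctrnn_step(y, W, tau, theta, I, dt=0.01):
      # tau_i * dy_i/dt = -y_i + sum_j W_ij * sigmoid(y_j + theta_j) + I_i
      dydt = (-y + W @ sigmoid(y + theta) + I) / tau
      return y + dt * dydt

  rng = np.random.default_rng(0)
  n = 3
  y = np.zeros(n)                         # neuron states
  W = rng.normal(0.0, 1.0, size=(n, n))   # fixed random weights (no synaptic learning)
  tau = np.ones(n)                        # time constants
  theta = np.zeros(n)                     # biases
  I = np.array([0.5, 0.0, 0.0])           # constant external input

  for _ in range(1000):
      y = ctrnn_step(y, W, tau, theta, I)
  print(y)  # internal state after 10 simulated seconds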

• #9
  Originally posted by decadecadeca View Post
    What level of "understanding" is that?

    Computers can already learn from past mistakes. Land-navigating robots placed in a maze will learn from blocked routes and not take that path again, essentially "learning" how to successfully exit the maze.

    "We have shown that evolution of learning-like properties is possible without modifications of synapse strengths, but simply by relying on complex internal dynamics of CTRNNs. ... However, neurophysiological experiments have indicated that the way animals and humans perceive, classify, and memorize, for example in the olfactory system, is by transitions between chaotic attractors in dynamical systems formed by large numbers of neurons in the brain. These results correspond nicely with the view of memory and learning presented in this paper."

    http://infoscience.epfl.ch/record/63...l_evorob03.pdf

    I'd cite it properly but I don't know if you're using APA or MLA.
  Good article. We don't need citations; this is an informal paper. We simply need to fight for our point of view, then play devil's advocate, and then conclude with why the counterargument is wrong. He jokingly said that if we use ANY citations we'd fail the paper.

  Reading through your part of the argument, i.e. the land-navigating robots, you could argue that they were programmed to do so, whereas a strong AI would be able to be placed on a planet with the simple direction to navigate. The robots in your example have been programmed with learning algorithms and such, so they learn only because they have already been programmed to learn and told not to take routes that are difficult to get through.

• #10
  Originally posted by alwaysgrowing View Post
    It depends how you define experience. If you mean whether it can happen, then the computer will experience frustration, just not in the sense that a person would. The computer may play a game and realize it is not winning. If it then turns itself off, that would be a form of frustration.

    As for the materialist, chemicals are king. New love is the release of hormones in the brain. Once these chemicals fade, what remains is true love. If you keep pumping someone full of these chemicals, the new love won't fade. Therefore, you can make someone fall in love.

    To make them fall out of love, if you can find a way to eliminate these chemicals, you would force someone to fall out of love, and possibly to lose all happiness.
  But who is to say everyone experiences emotions the same way as the next person? For example, if you take antisocial personalities and look at their brain scans, you'd see their pleasure is rooted in different places than the average person's, and they also seek pleasure from greatly different things.

• #11
  Originally posted by dbjmofo View Post
    Good article. We don't need citations; this is an informal paper. We simply need to fight for our point of view, then play devil's advocate, and then conclude with why the counterargument is wrong. He jokingly said that if we use ANY citations we'd fail the paper.

    Reading through your part of the argument, i.e. the land-navigating robots, you could argue that they were programmed to do so, whereas a strong AI would be able to be placed on a planet with the simple direction to navigate. The robots in your example have been programmed with learning algorithms and such, so they learn only because they have already been programmed to learn and told not to take routes that are difficult to get through.
  Aren't humans programmed to learn?

• #12
  ^ I'd love to believe that.

• #13
  Originally posted by decadecadeca View Post
    Aren't humans programmed to learn?
  It's more of an evolutionary thing in humans and more of a technical thing in computers. A person doesn't go around tweaking babies' brains. And even then, in humans the learning gets more and more advanced, so even the learning itself evolves more in humans than it does in computers.

• #14
  Originally posted by dbjmofo View Post
    It's more of an evolutionary thing in humans and more of a technical thing in computers. A person doesn't go around tweaking babies' brains. And even then, in humans the learning gets more and more advanced, so even the learning itself evolves more in humans than it does in computers.
  I disagree.

  We're born with a genetic code, a set of programmed instructions. Over the course of our lives, the people we interact with input data, and we process it, learn from it, and adapt.

  No different from a computer. The more a person learns, the more options they have in their programming menu; again, no different from a computer.

• #15
  Originally posted by decadecadeca View Post
    I disagree.

    We're born with a genetic code, a set of programmed instructions. Over the course of our lives, the people we interact with input data, and we process it, learn from it, and adapt.

    No different from a computer. The more a person learns, the more options they have in their programming menu; again, no different from a computer.
  Again, we program ourselves. We're fully capable of evolving on our own, without routine maintenance. Computers can really only do as much as you tell them to. You can't take a computer programmed to work in a car factory, put it to work in a wheat plant, and expect it to adapt. It's programmed to do what it was intended to do.

  The example the professor used in class was C-3PO and R2-D2, and how they were the ideal example of "strong AI": they were allowed to move around on their own and had a sense of "free will".
